Effective Board Meetings
Symptoms
For most people with little experience of boards of directors, the belief is that the main work of boards takes place at their official, formal meetings. In fact, a great deal of important board work takes place before and after those meetings. Nevertheless, the quality of formal board meetings can make a considerable difference to a board’s success. At the very least, having to sit through a number of poorly run meetings can destroy the commitment of even the most dedicated supporter of an organization’s cause. Meetings that are poorly organized, go on too long, go off on tangents instead of sticking to the point, or feature personal conflicts or domineering individuals turn people off and can cause serious damage by leading to poorly considered decisions.
A high percentage of agreement with the following statements indicates a board that might have problems in carrying out its meetings effectively and efficiently:
• The agenda for board meetings does not get into the hands of board members in time for them to familiarize themselves with the issues before the meeting.
• When the agenda does come, there is too much information to digest or not enough to adequately familiarize board members with the issues.
• The agenda for meetings is so full of “routine” motions or items “for information only” that there isn’t time to discuss more important matters.
• The agenda items of greatest importance often come up too late in the meeting, when board members are too tired to concentrate on them.
• We have problems when it comes to attendance at board meetings; too many members miss too many meetings.
• Board meetings often go on too long.
• Once the board has finished discussing something, it is not clear who is going to do what and when.
• There is too much unconstructive arguing among some members during meetings.
• Meetings are run too informally, for example with more than one person talking at once, no time limits on discussions, etc.
• Meetings stick too closely to formal “rules of order,” so thorough, probing discussions are discouraged.
• A few members seem to dominate discussions and this discourages quieter board members from contributing.
In essence, problems with board meetings can be grouped under five headings:
1. Agenda clarity and timing
Critical to the success of any formal meeting is having a clear agenda that organizes the planned content of the meeting. Also, if the agenda document is delivered to participants too late (or, even worse, is not made available until the beginning of the meeting), people cannot prepare adequately.
2. Supporting information
There is also a need for the agenda to contain enough documentation on the matters to be discussed to get everyone “up to speed” on them before they are brought up. Nothing renders a board ineffective more than members scrambling to read important materials at the same time as an issue is being discussed or, worse, not having important material available to read beforehand. On the other hand, it is also possible to provide too much in the way of supporting materials with agendas—materials that are not really relevant to the matter at hand but that the agenda preparers misguidedly think “might be useful.” The regrettable tendency of many board members, when faced with a huge pile of documents that do not obviously relate to the issues at hand, is to give them a glance at best.
3. Meeting content
The most common problem with board meetings is that too much time is spent listening to reports “for information only” (i.e., reports that do not require any decision other than a motion to “accept the report”), or discussing matters that could better be discussed and decided upon by the CEO and her/his management team or by a committee of the board. The ideal meeting puts the most important matters requiring motions and decisions as close to the top of the agenda as possible and provides enough time for careful deliberation. To be sure, some of the matters that are “for information only” could be conceived of as necessary in that they help the board carry out its due diligence function of ensuring that everything is running well and according to plan. Identifying these matters requires careful thought. However, certain reports from committees or managers need not take up the time of board meetings unless they contain motions requiring board-level decisions (see the discussion of “consent agendas” under “Treatment” below).
4. Clarity and effectiveness of decisions made at board meetings
Even when the agenda and the meeting content are well designed, board meetings can be less than successful if the decision-making process is flawed. For example:
• Meetings are dominated by a small group of “talkers” while quieter members with useful things to say are not drawn into the conversation;
• Not enough time is scheduled for a full discussion of an issue;
• The discussion goes off on tangents that are not relevant to the issue;
• Decisions are reached but lack clarity about who is going to do what and when;
• No follow-up is provided to permit the board to check on progress made in implementing decisions taken at prior meetings.
5. Attendance
If there are problems with meetings in any or all of the four areas discussed above, they may result in poor attendance at board meetings—too many people missing too many meetings. Whatever the cause of poor meeting attendance, it is a good indicator of possible problems in the way the board is working.
Diagnosis
Since very few people like taking part in meetings that are too long, confusing or boring, why do so many boards get into situations where this is exactly what happens? The simplest explanation is that these practices become part of the culture of the board and no one recognizes that they could be changed. Typically, when a nonprofit organization is young and being run by a small handful of enthusiasts who are willing to do anything and everything needed to keep things going, informal board meetings that deal with everything and move quickly from crisis to crisis are common. As the organization evolves and professionalizes to the point that it can afford to hire a paid CEO or develop committees to whom things can be delegated with confidence, the old meeting practices of the board often fail to change even though they become more and more inappropriate for the situation. Another major reason for poor meetings is the lack of a good meeting facilitator in the role of the board chair. Our research, and that of a few others, on the role and impact of the board chair suggests that meetings become ineffective when the chair is either under-controlling (lets the meeting get off track or allows a few members to dominate) or over-controlling (the meeting becomes very formal and rigid).
Poor chairing can occur when the skills and aptitude needed for effective meeting leadership are overlooked when choosing a chair or the chair has not had time to develop the skills needed for the role. Meeting management skills are very learnable if proper training is provided but many of those who end up accepting nomination for the Chair or President role may not see they need to develop them. A more detailed discussion of the role of the Chair appears in the chapter on Leadership in these Guidelines. Sometimes the reason for ineffective board meetings, unfortunately, is the presence of a CEO (paid top manager) who, consciously or unconsciously, does not want a strong board but rather one that is dependent on him/her for information and guidance in all its deliberations. Such CEOs can provide too little information on important issues which leads to “rubber-stamp” decision-making. Conversely some CEOs try to manipulate their boards by providing too much information—“a snow job.” They can also bias the information that is provided so as to favor one position on an issue over another. Also, they can strongly influence the selection of new board members so as to ensure that only those with the same point of view on issues as themselves are chosen. All of these actions by CEOs can reduce board effectiveness. Treatment The goal of official meetings of the whole board should be to focus on issues that have implications for the strategic direction of the organization or that create understanding about an issue or situation the organization is facing. Even with a clear focus on matters of strategic importance, the effectiveness of boards can be influenced by a number of factors, including meeting frequency and times, meeting length and design, and meeting rules and attendance. Meeting Frequency and Times There is definitely no fixed rule about the optimum frequency of official meetings of the whole board. Actual practice can vary from monthly to annually. The governing criterion, which can be stated in by-laws or board policy manuals, ought to be that the board should hold a formal meeting when it has enough business to warrant doing so. For example, in working boards, meetings could occur quite often. In governance-only boards, they may occur less often. Chairs and CEOs can recognize if they are calling too many board meetings if they have an attendance problem or if they find themselves thinking, “Another board meeting coming up. How can we fill up the agenda this time?” In the case of some governance-only boards in very stable environments, meetings might be only three times a year: a meeting to approve the strategic plan; an interim progress report meeting; and an evaluation meeting to assess how well the organization has performed. These, however, are official decision-making meetings. But in today’s complex governance environment with multiple stakeholders and a fast changing and threat-laden world, it is much more likely that issues will come up that will require boards to come together more often and to interact with people other than themselves and the top management team. 
In Governance as Leadership (2005), Chait, Ryan and Taylor suggest “landmarks” or “characteristics of an issue” that boards should recognize as opportunities to engage in what they call “Generative Governing”:
• The issue or situation is ambiguous or there are multiple perspectives on it;
• The issue or situation is salient in that it is important to different people or constituents;
• The issue or situation is high stakes in that it relates to the organization’s purpose or core values;
• The issue or situation may be polarizing and there is a need to bring people together; and
• The issue or situation is irreversible in that it cannot be easily changed after a decision is made (p. 107).
For this reason, many nonprofit organizations today find it useful to differentiate between decision-making meetings and “information briefing meetings” held for the purpose of becoming informed about a single important strategic issue. These latter meetings are usually characterized by less formal discussions and feature input from invited staff, outside experts or representatives of clients, members or external stakeholders. Specific motions are not debated; instead, information is provided, alternatives are identified and opinions sought. This is all fed to the relevant board or management working groups, who then develop specific policy recommendations in the context of the organization’s strategic plan. Formal discussion and voting on such recommendations occurs at one of the decision-making board meetings.
The timing of board meetings is important when board membership is diverse and members’ availability does not fit the same period of the day or day of the week (mothers caring for children unable to attend midmorning meetings, shift workers unable to attend evening meetings, or others unable to meet on weekdays). The organization must be conscious of the need to vary meeting times in such circumstances so all board members have an equally fair opportunity to attend. It is also worth considering the use of modern communications technology, such as conference calls, Skype, and online meeting applications, as a means of allowing participation by those not able to attend in person.
Meeting Length
Another indicator of board meeting mismanagement is meeting length. The span of time that the average person can focus on complex decision-making tasks without losing clarity of thought is no longer than 50 minutes, though this can be extended somewhat with refreshment breaks. Board meetings that regularly last longer than two hours can be an indication of problems. Either too much time is being spent on issues that do not need to be considered by the whole board or there are too many items that involve long-winded reports “for information only.” Alternatively, the regular occurrence of long debates that extend meeting times may indicate badly worded motions or poorly prepared reports that do not contain enough supporting data. When these kinds of long discussions occur often, attention should be paid to improving the work of the committees or managers who prepare the agenda items in question. Occasionally, CEOs seeking rubber-stamp approval of their recommendations on contentious issues deliberately create long agendas. They then ensure that the issues they want the board to rubber-stamp are placed at the end. By that time, no one has the energy to think, let alone discuss and object.
The “consent agenda” option When board meetings go on too long because too many of the items being presented are “for information only,” a solution increasingly adopted by many is the introduction of a “consent agenda.” The following is an example of the use of a consent agenda suggested by board expert David Renz (2006): When a by-law or some other rule or regulation requires formal approval by the board, yet there is no value added by engaging the board in discussion about the item (e.g., a routine lease renewal for a facility already included in the approved agency budget). The procedure is to have all items of this type sent beforehand to board members. When these items come up at the meeting, there is no oral presentation or discussion of the information. Instead, it is taken as understood that the information has been reviewed by members beforehand and will only be discussed if anyone has a question or wants to comment on it. Consent agenda items are usually put forward at the beginning of the meeting. Use of a consent agenda can save large amounts of time though the disadvantage is that it might hurt the feelings of those who prepared the reports and would like to have their “moment in the sun” before the whole board. A conscious effort to recognize and praise the work of individuals and committees that prepare material that is included in a consent agenda can help mitigate this problem. Another tactic for controlling the length of meetings is to have the agenda preparers estimate the length of time that will be needed for presentation and discussion of each item and insert the “estimated time” on the agenda document. These estimates should be treated as guidelines only, however. Sometimes issues end up requiring more than the time allocated to them. Rather than arbitrarily cutting off important discussion, it is better for the Chair to ask permission of the meeting to extend the time, then either postpone discussion of less important matters or reduce the discussion time on them. The flip side of meetings that go on too long is the meeting that ends too soon. Meetings that the board rushes through in, say, half an hour could be an indication of a rubber stamp board. If this happens regularly it might be that the board has been conditioned not to question whatever is put before it or simply that there should be fewer meetings. Meeting Design As discussed above, one of the most common complaints of board members is that meetings are “not properly organized.” Specific problems include the following: • The agenda does not reach board members until very shortly before, or even at, the meeting so they have no time to prepare; • The agenda contains too much information that is irrelevant to the issues to be discussed or there is not enough relevant information; • The order of the agenda items places unimportant and routine items at the top while important ones are at the end, when energy tends to run out; • Meetings fail to follow accepted “rules of order” so can become too disorganized; or, conversely, are too formal or rule bound, thereby discouraging full and frank debate. Except in rare emergency situations, there is really no excuse for not getting agendas into the hands of board members three to five working days before the meeting. It is often helpful when planning the content of the Agenda to request board members to submit suggestions for matters needing discussion. Agendas should be organized so that items requiring decisions are put at the top. 
All supporting material should be directly relevant to the impending discussion.
Meeting Rules
Even the most informal working boards should adopt one of the standard authorities on “rules of order” for meetings, such as Robert’s Rules of Order (http://www.robertsrules.org/rulesintro.htm), to be used as a guide in conducting official board meetings. It is also important that the Chair, or another designated person, be familiar with these rules and how they are applied. This, however, does not mean that all meetings must be run in strict accordance with these “parliamentary” rules. The rules are primarily of benefit when the items to be discussed are likely to be highly controversial, with a lot of disagreement among board members. As in any emotion-laden debate, rules are needed to make it fair. These would include: how often a person can speak, rules regarding how amendments to motions can be made, when and how a motion can be tabled, what constitutes being “out of order,” etc. In most non-crisis situations, however, a much more relaxed approach can be taken to meeting rules, provided the informal culture of the board is one that values an orderly, business-like approach.
Meeting Attendance
Spotty attendance by a high proportion of board members is usually an indication that a significant number of members are dissatisfied with the board and/or their role on it, though sometimes it indicates logistical problems such as meeting times that don’t suit a number of people. Some consultants urge compulsory attendance rules as a way of getting members out to meetings, e.g., “Members must attend at least 2/3rds of the meetings each year or resign, unless a valid excuse is provided and accepted by the Executive Committee.” This may get out the members but can mask the real problems behind low commitment.
Table 7 contains additional useful information and resources to increase the governance effectiveness of the organization through high quality board meetings.
Table 7: Additional Resources to Improve the Quality of Board Meetings
Topic | Country | Source | Website
General Guidelines: How to Conduct Effective Meetings | U.S.A. | Free Management Library | http://managementhelp.org/misc/meeting-management.htm
General Guidelines: How to Conduct Effective Meetings | U.S.A. | About.com | http://nonprofit.about.com/od/boardq...redom.htm?nl=1
General Guidelines: How to Conduct Effective Meetings | Australia | Victorian Public Sector Commission | http://www.ssa.vic.gov.au/governance...on-making.html
General Guidelines: How to Conduct Effective Meetings | Britain | Accounting Web | www.icsa.org.uk/assets/files/pdfs/090928.pdf
Meeting Planning and Scheduling | U.S.A. | Cause and Effect | http://www.ceffect.com/wp-content/uploads/2014/03/board_meeting.pdf
Meeting Rules | U.S.A. | Houston Chronicle | http://smallbusiness.chron.com/nonpr...les-21608.html
Consent Agendas | U.S.A. | Midwest Center for Nonprofit Leadership | bloch.umkc.edu/mwcnl/resource...ent-agenda.pdf
The Composition and Development of the Board
Symptoms
A major component of effective boards is having the right combination of people on them and providing them with ample opportunity to learn what they need to know to be good governors. The two basic requirements for all board members are that they be committed to the mission of the organization and have the time and energy to devote to the work of the board. After that, the specific mix of leadership competencies that is best for a given board in the environment in which it works can vary greatly from one organization to another. Even though most boards probably have members who get along quite well, some may have a sense that, as a group, the mix of people might not be ideal. They may feel that the board as a whole is lacking in expertise, is not diverse enough and/or has not received the level of orientation and training it needs to become highly effective. A high percentage of agreement with the following statements indicates that board composition and development might be improved:
• Looking at the board as a whole, there is not enough “new blood” coming on to it to bring fresh energy and ideas.
• Finding high quality new board members is a problem for us.
• We do not pay enough attention to making sure we get the mix of skills and backgrounds we need in the new board members we recruit.
• The diversity of the public with an interest in this organization is not well represented in the make-up of the board.
• We don’t do a very good job of orienting and training new board members.
• There is not enough ongoing professional development and training for regular board members.
Diagnosis
There are several possible reasons for issues relating to the composition and development of the board. As with so many problem areas in board governance, the root of difficulties with the composition of the board may lie in the culture that the board has evolved. Unspoken shared attitudes may exist such as, “We can always find good members just by asking our friends”; or, “It’s too hard to find new people so let’s just keep the ones we have”; or, “New members can easily learn the ropes just by sitting in on our meetings and watching how we do things.” Where do such attitudes come from? Sometimes there are informal subgroups within boards—older members who have been around a long time, for example—whose opinions dominate the rest of the board. Since their selection and training was informal and based on connections to prior board members, they feel there is no need to change. Another source of difficulty in developing a better mix of members and a trained board is failure to allocate responsibility for doing this to a specific role (e.g., chair) or board committee (e.g., governance committee). Thus, even though the board may sometimes talk about developing the board or recruiting different kinds of new members, it may not be knowledgeable or organized enough to implement those practices. Board by-laws that make no provision for limited terms of office for board members invite the possibility that the board will not attempt to rejuvenate itself. It could also be the case that the board does not carry out regular assessments of its own performance and is thus “blind” to the need for change in board composition, selection criteria or development practices. A special kind of blockage to changes in board composition can arise in relatively new organizations dominated by one or more “founding fathers.” The problem created is called Founder’s Syndrome.
It occurs when the founding fathers (or mothers) of the organization have created a culture, like that described above, in which board turnover is believed not to be necessary. Instead, the founders see themselves as the “keepers of the flame” and don’t want to risk newcomers making changes in how things are done.
Treatment
To increase the effectiveness of the governance function, consider the following approaches to board composition and development.
Board Composition
Critical to having a successful board is getting the right people on it in the first place. The difficult part is deciding who will be “right” for the organization. Too often the tendency is to appoint members who resemble existing members or who are suitable for conditions as they were but who may not be suitable for a changing future. There is a good deal of advice available to those who are seeking to put together a successful board, but there are only two universal criteria that are supported both by research and the “how-to” authors:
1. Board members must be committed to the organization’s mission, i.e., they must believe strongly in what the organization is trying to do and seriously want to help. A board dominated by people who sit on it as a favor to a friend or because they believe it will look good on their resumé will not usually be effective.
2. Prospective members must have the time and energy to devote to the board’s business.
Establishing board recruitment needs
The first step in finding the best potential members for a board is to be clear about the kind of people one is seeking in the first place. One way to do this is by compiling a board needs document. Table 8 provides one example of a simple recruitment needs grid that can be used to identify possible composition gaps on the board.
Table 8: Sample Board Member Recruitment Grid
Potential Board Effectiveness Criteria | Board Members: A B C D E F G
Stakeholder Connections:
• Clients/Users/Audience
• Funders
• Actual/Potential Partners
• Governments or Other Regulators
• Community Leaders
• Business Leaders
• Other
Useful Skills and Experience:
• Fundraising
• Public Relations/Community Relations
• Leadership
• Finance/Accounting
• Planning/Strategy/Innovation
• Legal
• Marketing/Social Media
• Human Resources
• Performance/Project Management
• Data Analytics/IT
• Other
Demographic Representation (balanced diversity among the desired demographic characteristics, such as):
• Gender
• Geographic Location (region, district, state, country)
• Age
• Ethno-Racial Background
• Socio-Economic Background
Down the left-hand column of Table 8 are listed some of the kinds of background characteristics of individuals that might be important to have on the board. The actual characteristics chosen should be decided on by each organization, as they might vary depending on local conditions. Across the top of Table 8 are listed the names of the current board members. The committee doing the nominating then informally assesses the composition of the board by checking and describing the extent to which each member possesses each of the characteristics in column 1. The board can then be “rated” according to the extent to which it meets these criteria: “high” (present board members fulfill each criterion), “medium” (some criteria are met but not all) or “low” (a number of the criteria are not met by the present board members).
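For committees that prefer to keep the Table 8 grid in a spreadsheet or short script rather than on paper, the following minimal sketch illustrates one way the tally might be done. The member names, criteria and rating thresholds shown are hypothetical illustrations only, not values prescribed by these Guidelines.

# Illustrative sketch of a board recruitment needs tally (hypothetical data).
# Each current board member is mapped to the Table 8 criteria they are judged to meet.
members = {
    "A": {"Funders", "Fundraising", "Finance/Accounting"},
    "B": {"Clients/Users/Audience", "Public Relations/Community Relations"},
    "C": {"Legal", "Community Leaders"},
}

criteria = [
    "Clients/Users/Audience", "Funders", "Community Leaders",
    "Fundraising", "Finance/Accounting", "Legal", "Marketing/Social Media",
]

# A criterion counts as "covered" if at least one current member meets it.
covered = {c for checks in members.values() for c in checks}
gaps = [c for c in criteria if c not in covered]

share_met = (len(criteria) - len(gaps)) / len(criteria)
# Hypothetical cut-offs for the "high/medium/low" rating described above.
rating = "high" if share_met == 1 else "medium" if share_met >= 0.5 else "low"

print(f"Coverage rating: {rating} ({share_met:.0%} of criteria met)")
print("Recruitment gaps:", ", ".join(gaps) if gaps else "none")

A list of the uncovered criteria produced this way can serve as the starting point for the recruitment drive described next.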
A careful examination and discussion of the results of this process should give the committee an indication of the “gaps” in desirable background characteristics needed by the board at that time. This becomes the basis for the subsequent recruitment drive. Note The needs matrix discussed above will identify potentially useful characteristics in future board members. However, to know that someone possesses a certain qualification or experience does not necessarily mean that they will perform well as a board member. They must be able and willing to ‘do’, not just ‘be’. There are also many other important questions to answer when it comes to finding the ideal mix of people for a board. The main ones are discussed below. Should boards be composed primarily of “important” people? Having many “big name” people on the board can help in giving the organization credibility and a high profile in the community. And some, if not all, “names” have valuable talents. The dilemma is that many of these people may be on other boards or are so busy in their day jobs that they don’t really have time to do much more than make token appearances. Many organizations elect to keep the percentage of “prestige” members relatively small and tolerate minimal involvement as the price that must be paid for their ability to provide contacts and credibility. The majority of the board carries the workload. Of course if the “busy names” become the majority of the board, this can often lead to a rubber stamp board that simply approves recommendations brought to it by management. The other approach is to put the prestigious names on an “Advisory” or “Honorary” Board comprising those who can give useful help with specific matters (such as fundraising) and heighten the organization’s profile but who are not expected to govern. What is the right amount and kind of diversity to have among board members? It is generally agreed that boards should represent the diversity of the people that they serve but research has established that many boards do not achieve this representation (Bradshaw, Fredette & Sukornyk, 2009). Instead, the majority of their members have similar demographic and other background characteristics (usually middle class, middle-aged, well-educated, with business or professional experience and of European ethnic origin). To what extent this affects the board, or organization’s performance depends on how diverse the populations are that the organization serves. The hypothesis is that a non-representative board will increase the chances that the agency will serve the needs of non-mainstream communities poorly. Put in positive terms, the advantage of expanding a board’s diversity along ethno-racial, social class, gender and other dimensions is that this will improve the board’s “boundary spanning” function and lead to better strategic leadership. On the other hand, the fear associated with a very diverse board is that these new kinds of members won’t always understand how the board operates and won’t be able to make decisions in the best interest of the organization as a whole. Again, there is no research evidence that this frequently happens. Differences in background may sometimes make it more difficult to develop a comfortable, open, problem-solving climate but it is not impossible. 
Given careful selection of the individual nominees, placement in the “right” role (also known as functional inclusion), and an adequate board development program, a diverse board can be much more effective than a homogeneous one (Fredette & Bradshaw, 2011). How much should the board be made up of “stakeholders” who have specific interests in the organization, as opposed to more general “community representatives”? Stakeholders consist of organized interest groups, e.g., on a university board of governors, there would be representation from the student government, the faculty association, government departments, the alumni association, support staff association and associations representing the community. Again, the positive side of organized stakeholder representation is it promotes “bringing the outside in” and “taking the inside out.” Once more, the downside risk is the possibility that the representatives will feel they must act solely in what they see as the interests of the organized group they represent. Hard data on the extent to which this actually happens is very scarce. The probability is that problems arise only infrequently, but stakeholder organizations can cause major upheavals during crisis periods such as downsizing, opening or closing programs, or shifting attention from one client group to another. Again, great care in selecting the individual representatives and thorough board training in putting the interests of the organization first can help minimize the frequency of destructive approaches to conflict during periods of change. How well should candidates know the organization and the field that it is in? Another dilemma is the extent to which the board should consist of members who already have an in-depth knowledge of what the organization does and how it operates. For boards using the working board model, this is quite important, at least for selection of the majority of their members. For Governing Boards, it may be impossible, other than by choosing internal stakeholder representatives. A majority of Governing Board members will not be “experts” in the organization they govern or the “industry” in which it operates. This raises the question: how can they provide strategic leadership? As noted, the solution to this problem lies in thorough orientation and provision of at least partially independent information systems for the board. How much should “business skills” be emphasized? What is the extent to which board members should possess specific skills or knowledge based on their employment or training in areas such as business administration, corporate law, accounting, marketing, human resources, performance management, IT, and public and government relations. One school of thought says this kind of talent is very useful for providing the executive director with invaluable free advice on all sorts of management issues. The other says it is overrated and runs the serious risk of creating a board which is going to be primarily interested in management and unable to focus on governance issues. Again, there are no data to support either of these assertions so probably there is not a universally correct mix. Organizations with working and mixed model boards are, by definition, deficient in certain operational leadership and management skills so board members who can help fill such gaps are important. 
Even in large professionally managed institutions there can be certain areas of specialized knowledge that the organization cannot afford to pay for but which a board member might possess. As a general rule, it is wise to be aware of the skill, knowledge and abilities of the Executive Director and his/her management team. The better they are, the less need for board members to fill gaps. When it is necessary to select board members with specific skills, the key is to train these useful specialists to understand that their expertise will be sought in the roles of advisors or implementers only, not as decision-makers. A note on the need for board candidates being willing and able to donate money to the organization In some cultures and certain nonprofit organizations that depend heavily on donations from the public or corporations, there is an expectation that board members show their support for the organization by making a personal donation of money. The belief is that showing the public that all board members make donations will help in fundraising appeals. There are two points to consider when thinking about making board member donations a criterion for member selection: Though most professional fundraisers tend to believe that publicizing a unanimous board donation record makes a difference to how much external donors will give, there is no actual evidence from empirical research showing that this is so. If showing unanimous donor support is deemed to be necessary, it is not necessary for such donations to be large. They need only be what each member is willing and able to afford. What individual personal qualities to look for? Developing broad criteria for board selection such as those discussed above is important but, in the end, the most important criteria are those that are the most difficult to specify and measure in potential candidates for membership. These are the personality characteristics that one wants to see in board members. Everyone who has ever spent much time watching different boards come and go in an organization will agree that, some years, the majority of board members seem particularly quick to understand issues, be creative and constructive in their handling of differences and business-like, while in other years the opposite qualities prevail. Since most boards don’t like to check carefully into the personal qualities of the people they nominate, it is almost a matter of chance how well the mix works out in any particular year. What is needed, clearly, is: (a) an attempt to articulate the kinds of personality characteristics and personal values that are being sought and (b) a serious attempt to state how they will be discerned in any given nominee. Under heading (a), the following are some of the qualities that we found were associated with high impact leadership on nonprofit boards: • Honest, helpful, and humble. • Self and socially aware. • Able to “see the big picture.” • Creative and open to change. • Able to communicate, work well with others, and handle conflict respectfully. Regarding (b), there is not enough space here to provide a full review of the most valid methods for assessing these characteristics in people; most textbooks on human resource management will do that. Suffice it to say here that the essence of the process lies in how the candidates’ past behavior is checked through references. This process needs to be systematically thought out in advance and implemented with care. 
The all-too-common method of nominating someone whom one other board member believes “is a wonderful person” just is not good enough. These days, most people called to provide references are loath to communicate negative things, especially in writing. However, some might be more open to sharing useful information in oral conversations in which they are asked questions that relate to specific actions, e.g., “What role did ‘x’ play in your strategic planning process?” or “How was ‘x’ involved in your fundraising activities?”
A carefully designed board recruitment process looks something like this:
• It is carried out by a Governance (or Nominations) committee of the board.
• The committee looks at the strengths of the existing board and tries to identify gaps in skills, abilities and background that need to be filled.
• A widely broadcast call for nominations is made highlighting the qualifications sought.
• Those making nominations are asked informally to provide information on why their candidate(s) would be suitable, using the criteria in Table 8.
• Those who are willing to act as references for a potential nominee are asked how they perceive him/her in specific situations.
A short list of suitable nominees is then created and ranked as strong, medium or weak candidates. Each person on the list is approached by the board chair in the order in which they are ranked.
The special problems of low-profile organizations
Unfortunately, for a large number of worthy but low-profile organizations that support less popular causes, the problem of board composition is not one of how to choose among a range of possible candidates; rather it is to find enough people of any kind who meet the basic criteria of commitment to the organization’s mission and willingness to devote enough time and effort to the cause. This is a problem of recruitment rather than selection. Solving it requires developing a focused, formal recruitment program for board members. The usual method employed by successful nonprofits of this type is the “grow-your-own” approach. This is accomplished by concentrating on getting a lot of working volunteers to help with programs and projects. The best of these are then identified and systematically wooed and trained to accept increasing amounts of responsibility, including the leadership of others. Before long, those with the skills and attitudes required on the board can be asked to join it (which, in these situations, is almost always a working board). In desperation, one can trust recruitment to the efforts of a few board members to pressure their friends to join, but don’t expect a very effective board as a result.
A final word on board composition
Though there are no hard and fast rules about how a board should be made up, there is probably one generalization that fits all voluntary organizations facing rapidly changing, often threatening, environments: strive for balanced diversity. The exact kind of mix will vary from situation to situation, but a mix it should be. Older, younger; men, women; rich, poor; “old hands,” “young blood”; business and non-business backgrounds; multi-ethnic and multi-racial—the criteria can vary. But only with a balanced mix can the organization improve its chances of getting the fresh ideas and specialized information it needs to cope with its changing world.
Remember, however, that to make it all work, the board needs training in how to work together as a team and in how to discern the greater good of the organization as the basis for making all decisions. Board Development Even though boards may manage to find the ideal mix of skilled and committed people to become members, they may still end up losing some of them or having them perform ineffectively. This may be because members do not know what is expected of them, or lack the skill and knowledge needed to make good decisions in the governor role. The most direct way to deal with this problem is through a well-planned system of board orientation, development and evaluation (Brown, 2007, Brudney & Murray, 1998; Green & Gresinger, 1996; Herman, 2005; Herman & Renz, 1998, 1999; Herman, Renz & Heimovics, 1997; Holland & Jackson, 1998; Brudney & Nobbie, 2002). The components of such a system are: A board manual which provides full background information on the organization and its articles of incorporation and by-laws, current programs and plans, descriptions of the position of board members, and outlines of the responsibilities of board officers and committees, and minutes of recent board meetings. A formal orientation program at which new board members meet top management officials, tour facilities and hear presentations on the organization’s programs and background information on strategic issues. Also helpful here are informal “mentoring” programs, which pair new members with current members. A good mentoring program will “train the trainer” by providing the mentor with a checklist of topics to discuss and the necessary information to cover. Periodic formal occasions at which the board assesses its own performance, for example by using Board Check-Up or questionnaires covering much the same topics as the content of this guide. Feedback from the management team and staff and stakeholders who interact with the board should also be obtained. Also useful in helping boards get a realistic picture of how they are doing is obtaining periodic feedback from key external stakeholders on how they view the board’s work. Differences in perceptions of board effectiveness between the board, staff and external stakeholders are often an indicator of a potentially harmful situation that should be addressed. Finally, there is value in creating a “buddy system” for new board members in which individual existing board members with good knowledge of how the board and organization works are formally asked to mentor new members during their first year on the board. There is also a need to assess the performance of individual board members with an eye toward continuous growth and effective support of the organization. This can be done by having board members self-assess their performance informally or through formal analysis of data collected from a questionnaire. It is remarkable how honest many are willing to be. However, attendance records and feedback from the Chair and Committee Heads can also be useful. The main problem is that assessing individual performance often feels like a very awkward thing to do because members are volunteers, have egos and a certain amount of prestige in the community. It is not impossible, however, if board members are shown when they join that there is a formal system of board self-evaluation and understand how the information obtained through it is to be used. 
This problem can be overcome if the culture of the board is understood to be one of support for each board member in order to help them contribute as best they can. Rather than identifying and concentrating on shortfalls, individual performance discussions can be opportunities to set goals for the coming year, identify areas of interest that the board member may want to expand on, and explore new roles they may assume in order to help the organization prosper.
Table 9 contains additional useful information and resources to increase the governance effectiveness of the organization through board composition and development.
Table 9: Additional Board Composition and Development Resources
Topic | Country | Source | Website
Board Composition and Diversity | U.S.A. | Board Source | https://www.boardsource.org/eweb/dyn...ur-Ideal-Board ; www.boardsource.org/eweb/Dyn...b-61a0efe92325
Board Composition and Diversity | U.S.A. | Creating the Future | http://www.help4nonprofits.com/NP_Bd_Diversity_Art.htm
Board Composition and Diversity | U.S.A. | Blue Avocado | www.blueavocado.org/node/762
Board Composition and Diversity | Britain | KnowHow NonProfit | http://knowhownonprofit.org/leadersh...%20recruitment ; http://knowhownonprofit.org/leadersh...eople-on-board
Board Composition and Diversity | Australia | SVA Quarterly | http://knowhownonprofit.org/leadersh...eople-on-board
Board Recruitment | U.S.A. | Free Management Library | http://www.managementhelp.org/boards/recruit.htm
Board Recruitment | U.S.A. | Board Source | www.boardsource.org/eweb/Dyn...f-8b9d5647c03f ; https://www.boardsource.org/eweb/Dyn...-Board-Members
Board Recruitment | U.S.A. | Bridgespan Group | http://www.bridgespan.org/Publicatio...x#.U2lHvfldW58
Board Recruitment | U.S.A. | Guidestar | http://www.guidestar.org/rxa/news/ar...resources.aspx
Board Recruitment | U.S.A. | Center on Public Skill Training | www.createthefuture.com/developing.htm
The Informal Culture of the Board
Symptoms Board culture is the collection of taken-for-granted attitudes, social norms, perceptions and beliefs about “how we do things around here” shared, usually unconsciously, by a majority of board members. A high percentage of agreement with the following statements might indicate problems with the informal culture of the board: • Too many board members seem unwilling to devote much time or effort to the work of the board. • There are many differences of opinion among board members that never get resolved. The board doesn’t handle conflict very well. • The board does not regularly and systematically assess its own performance and change itself if it thinks it can improve. • Board members tend not to be involved in representing the organization to the outside community or bringing the concerns of that community into the organization. • As far as I know, many board members have contacts among people who might help the organization but they are not encouraged, or given the opportunity, to make use of them. • Individual board members with skills and knowledge that might be of use to the organization are rarely approached informally for their assistance. • Little effort is made to help board members get to know one another and develop “team spirit” as a group. Diagnosis Why do cultures evolve the way they do? There are at least six major sources of influence that can shape board culture without the members being aware of it: 1. Sometimes there are external pressures for boards to act in certain ways that come from critical stakeholder groups such as funders, members or associations of other nonprofit organizations in the same “business.” 2. In the case of boards of relatively young organizations or those with a high percentage of new members, certain beliefs, attitudes or social norms may come from founders or members who have previous experience on other boards. 3. Similarly when most board members share a homogenous background in terms of such things as age, social status, ethnicity, etc., there is a greater likelihood that they will quickly evolve similar attitudes toward the way their roles and responsibilities on the board should be carried out. 4. In some boards, there are members with disproportionately larger resources (largest checkbooks, influential friends, political connections, etc.) who have a greater amount of influence in the group. It is not that they intentionally manipulate others, but nonetheless they may have a larger voice than others in shaping board culture. 5. One of the strongest influences shaping board culture is the behavior of those in the critical leadership positions of Board Chair and CEO. While these positions do not have formal authority to make board decisions, they do carry a great deal of informal influence over the process. 6. Finally, in addition to the influence of the Board Chair/CEO positions, some boards evolve small sub-groups within themselves, sometimes referred to as cliques or “core groups” (Bradshaw, Murray, & Wolpin, 1992). These informal groups are not recognized officially in any way though they may dominate certain committees or formal offices. They are important because the attitudes and beliefs of their members regarding how the board should operate or what position it should take on various issues can significantly impact those who are not part of the group. It is important to realize that “core groups” are not necessarily a bad thing. 
Sometimes they emerge simply because some people have more time and interest in the organization so do more and find others with a similar outlook to whom they naturally gravitate. Other inner groups develop because they are made up of people with similar periods of long tenure on the board (part of the founder’s syndrome phenomenon discussed earlier). This is especially common when there are no fixed maximum terms of office for board members. New members might come and go but a core of “old timers” stays around. Core groups become a problem when they engage in any of the following behaviors: • Control of information or a willful disregard for providing proper orientation to new members thus leading them to feel like outsiders; • Pushing an “agenda” of their own based on interests or positions on important issues that may not be shared by others; and • Engaging in “backstage” politicking in the form of “secret” informal meetings outside of regular board meetings to plan how to have their positions on issues approved at the meeting. However, whether or not core groups are a positive or negative influence on the board, it is useful to recognize they may exist and discuss the role they play so everything is out in the open. Treatment Changing a board’s culture can be very difficult because, by definition, it is something that has developed over time about which many are not consciously aware. So the first step in the change process must be that of surfacing what has heretofore been taken for granted. How to do this? One way is through the use of fully confidential self-assessment exercises such as the Board Check-Up, University at Albany, SUNY sponsored research project offered online for free by www.boardcheckup.com. This questionnaire, which includes the items above, is especially useful if effort is made to obtain accurate, anonymous perceptions of the board’s culture from board and non-board people who have occasion to observe and/or interact with the board. When results show, for example, that some respondents perceive that there is an inner group that has more influence than others, this finding needs to be put before the whole board for discussion. This discussion should cover the following points: What is the evidence that suggests such a group exists? If there is consensus that it does, why has it emerged? Most importantly, is the behavior of this group good for the board or not-so-good? That is, on balance, do the group’s actions contribute to, or reduce the effectiveness of the board’s decisions and/or individual members in meeting duties of their fiduciary role? The objective of this open discussion of the perception of an inner group on the board is not necessarily to do away with it. Indeed the board might well want to encourage it since they often do more than others are able to do. Instead, the goal is to work on ways to keep them open and communicating with all board members and discourage “backstage” maneuvering. One of the best ways to prevent negative sub-groups from developing is to conduct regular exercises in team building for the board. The more the board as a whole thinks of itself as a team, the more sub-groups within it are likely to be positive, open and sharing. In a similar way, the use of outside consultants may yield insights into the workings of the board that the board has been unable to see for itself. 
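Boards that gather this kind of anonymous feedback often want a simple way to see which statements attract the “high percentage of agreement” referred to throughout these Guidelines. The following is a minimal sketch of one way to tally anonymous agree/disagree responses and flag statements that exceed an agreement threshold; the statements, responses and 70% cut-off are hypothetical examples, and the sketch is not the Board Check-Up instrument itself.

# Minimal sketch: tally anonymous agree/disagree responses to culture statements
# and flag those with a high percentage of agreement (hypothetical data).
responses = {
    "An inner group has more influence than the rest of the board.": [True, True, False, True, True, False, True],
    "Little effort is made to help board members develop team spirit.": [False, False, True, False, False, False, False],
}

THRESHOLD = 0.70  # hypothetical cut-off for "high agreement"

for statement, answers in responses.items():
    agreement = sum(answers) / len(answers)
    flag = "put before the whole board" if agreement >= THRESHOLD else "monitor"
    print(f"{agreement:.0%} agreement - {flag}: {statement}")

Whatever tool is used, the point is the same: results showing widespread agreement with a problem statement should be surfaced for open discussion rather than left implicit in the board’s culture.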
One advantage to having, and enforcing, by-laws that specify fixed terms of office for board members is that there will always be new members joining at regular intervals. Usually new members are expected to adopt the social norms and go along with the ways of operating that the board has followed in the past. However, if a conscious effort is made to ask new board members to provide critical feedback on their perceptions of how the board is working, a surfacing process might take place and needed changes in board culture might be made. If a board does engage new members in a change process, though, it must be open to new information and careful not to be critical of those who are honestly trying to share their perceptions in a constructive way. Finally, it is difficult to overemphasize the importance of the Board Chair and the organization’s CEO in creating and changing the unspoken culture of the board. Their leadership styles often set the tone for the way in which the board exercises its collective leadership of the organization. To learn more about the competencies of highly effective chairs, boards, CEOs, and leadership volunteers, see the next chapter on leadership. To learn more about culture in nonprofit boards and organizations and how to influence it, see the websites in Table 10.
Table 10: Additional Board Culture Resources
Topic | Country | Source | Website
Organizational Culture: General | U.S.A. | Free Management Library | http://managementhelp.org/organizations/culture.htm
Organizational Culture: General | U.S.A. | The Bridgespan Group | http://www.bridgespan.org/Publicatio...x#.U27DE4FdX84 ; http://www.bridgespan.org/Publicatio...x#.U27Ny4FdX84
Organizational Culture: General | Britain | KnowHow NonProfit | http://knowhownonprofit.org/organisa...ulture/culture ; http://knowhownonprofit.org/leadersh...opy_of_culture
Board Ethics | Australia | Australian Institute of Company Directors | www.companydirectors.com.au/D...ure-and-Ethics
Team Building in Boards | Australia | TMS Worldwide | http://www.tms.com.au/tms12-3c.html
Leadership
Symptoms
Most literature on the leadership function of nonprofit organization boards concentrates on the role of the board as a whole. This emphasis exists because, legally speaking, the board as a whole is the final authority for the organization—even though it may delegate some of its authority to a CEO. Similarly, it recognizes that no one board member may legally act as a representative of the entire board on a given matter unless given authority to do so by the Board itself. Just as in other work groups, however, boards have both formal and informal individual leaders within them—people who have a significant influence over how the group works and how effective it is. For example, as previously discussed, some boards develop influential core groups within them, and these can be a positive or negative force for change. While these kinds of informal leaders and groups are important to identify, it is generally agreed that the most influential leaders in nonprofit organizations are the board chair and the CEO. In many small voluntary organizations with no paid staff, the board chair and CEO may be the same person (though it should be noted that it is illegal for charities in some jurisdictions, such as New York, to have the Chair and CEO roles held by the same person).
A high percentage of agreement with the following statements might indicate problems with leadership of the board:
• There is a kind of “inner group” that seems to run things on the board and those who are not part of it sometimes feel left out.
• The board chair tends to be overly controlling.
• The board chair seems to have her/his own “agenda,” which is not always shared by others.
• The board chair is a bit too passive and/or disorganized in her/his leadership style.
• The board chair’s meeting leadership skills are not as strong as they could be.
• As far as I know, the board chair is reluctant to speak to board members who don’t carry out their responsibilities properly.
• As far as I know, the relationship between the CEO and the board chair is quite formal; they don’t talk much “off the record.”
• As far as I know, the CEO rarely consults individual board members for informal advice or assistance.
• There seems to be a lack of trust between the CEO and the board.
• The information that the CEO provides the board to help it make decisions is sometimes inadequate or too slanted.
• The CEO seems to be trying to dominate or control the board too much.
Diagnosis
It is important to realize that when respondents check the statements dealing with leadership, they are providing their perceptions only, not an “objective” reality. Also, it must be remembered that the reasons people are perceived as being more or less effective leaders may, or may not, lie within the leaders themselves. In other words, it is possible that situations and circumstances may create conditions that make it difficult for almost anyone in a leadership position to be perceived as effective. It is also possible that a person in such a position might be very effective under one set of circumstances but not in another. The case of Winston Churchill is often presented as the most vivid example of this—a universally acclaimed leader during WWII who was defeated at the polls once peace was restored after the war. Times changed and he was no longer seen as the leader people wanted. With respect to reasons for perceptions of leader ineffectiveness, our research suggests there are five major ones:
1. Lack of role clarity.
In our research on board chairs, we found a significant positive relationship between clarity of key actor roles and perception of chair leadership effectiveness.
2. Situational factors. For example, in our research, we found evidence of a negative relationship between CEO turnover and perception of chair leadership effectiveness and impact. This finding suggests that stability in CEO tenure may be associated with the chair being seen as effective.
3. The board’s own prior ineffectiveness. A board’s failure to adopt “good governance practices” is associated with perceptions of poor leadership. Our research and that of others (e.g., Ostrower, 2007) points to a significant positive relationship between the reported use of good governance practices (e.g., strategic planning, board performance assessment, assessment of CEO leadership and organizational effectiveness, etc.) and perceived leadership effectiveness. Because all of this research was based on one-point-in-time correlational methodologies, however, it is not possible to say whether the presence of good governance practices makes it more likely that leaders will be seen as effective, or whether leaders who are seen as effective are more likely to use their influence to help their boards adopt good governance practices.
4. The personality traits leaders bring to their position. In our research, it was found that chairs who were perceived as being honest, humble, and helpful were also more likely to be seen as having more impact on the performance of the board, the CEO, and the organization. The same relationship was found between perceived chair effectiveness and perceptions of the chair’s “emotional intelligence” (i.e., being self-aware and able to manage relationships with others) and possession of the traits associated with team leadership (being open, fair, respectful, able to create a safe climate where issues can be discussed, recognizing others, and not distracting them from goals, etc.). The findings of this research are supported by general leadership studies that show leader personality to be a strong influence on leader effectiveness (see Miller & Droge, 1986).
5. The involvement of followers with leaders. Our research on board chairs found that when members of a group spend more time with the leader and have more interaction with her/him, they are more likely to see the leader as effective. This could be due to the nature of the role they play. (For example, the CEO, board officers, and committee chairs are more likely to interact frequently with the chair than are “ordinary” board members, staff, or external stakeholders.) This explanation assumes that the “closer” people get to their leaders, the more likely they are to think favorably of them. These results suggest that leaders will benefit from spending time building high-quality relationships with others.
Whatever the reason, leadership ineffectiveness can be costly for a board and organization that fail to address it. The costs can be seen in board and CEO turnover, lower engagement, job dissatisfaction, low social cohesion, poor board morale, lack of public trust, inability to innovate, etc. For a full discussion of the informal leadership that may be exerted by “core groups” within the board, see Chapter 8 on Board Culture.
Treatment
The results obtained from this section of the Board Check-Up are the most sensitive and potentially difficult to handle of any.
Most people, when asked if they would like to have their job performance reviewed, tend to say they welcome feedback on how they are doing so they can use it to improve. In practice, however, many do not appreciate what they see as being “unfairly criticized,” no matter how much effort is made to make it “constructive” criticism. Varying degrees of defensiveness and hostility are common reactions even though they may not be made obvious at the time. In one early experience with the Board Performance Self-Assessment instrument, for example, a third of respondents indicated that they thought the board chair had difficulties running effective meetings. The board chair said nothing at the time but within a month quietly resigned, well before her term was up. One might argue that this was all for the best, but this overlooks the possibility that, if these results had been handled differently, she might have reacted differently and, for example, obtained some training in meeting leadership. The following are suggestions for dealing with perceptions of leadership issues:
• It is best if, before the results of the survey are revealed to anyone, the Survey Coordinator holds a discussion with both the CEO and the Chair about a hypothetical situation in which some board members report perceptions that these leaders are engaged in one or more of the problematic situations described in this section of the survey. This discussion should cover to whom these results should be communicated and how they should be interpreted.
• It is recommended that, at first, results should be communicated only to the person involved, e.g., the Chair or the CEO. It should also be agreed that there are various possible reasons for such results, as described above, which should be explored, and that they do not necessarily mean that the board is dissatisfied with the leader. It could be that there is a situation that the board needs to address that negatively impacts the leadership of the chair or CEO.
• Finally, it should be understood that all the behaviors described in this section could be addressed through additional training and/or coaching. It is quite possible that the style of leadership being called for is simply new to the leader in question. Also, it is not always possible for Chairs or CEOs to find the time or opportunity to take leadership development courses. What is more possible, in many cases, is to find effective, experienced leaders of boards who are willing to act as mentors or coaches to the Chair or CEO for short periods of time. Retaking the Board Check-Up after a period of such coaching can provide useful indicators of the extent to which leadership from these critical officers has improved and to what extent agreed-upon goals have been reached.
Leadership Development
The preceding discussion addresses how to help key leaders such as the Board Chair and CEO develop their leadership competency, but it does not get into detail about what these competencies should be. There is not space here to do justice to this very large and complex topic; however, a few key points can be made. Perhaps the most important is that there is no “one best way” to lead in all situations. Different mixes of board member personalities and different external conditions call for differing approaches to leading. The key question the board should consider is how nonprofit leadership can be managed for higher board performance.
The majority of the items in the list of board leadership issues above relate to the leadership competencies of effective chairs and CEOs. Other items relate to leadership influence and impact. For example, our research has shown that board chairs seen as exerting too little or too much influence in the role are also seen as having limited impact on the board, the CEO, the organization, and the support of external stakeholders. This research identified the following behaviors of highly effective chairs. Organized in clusters, they are:
• Motivation and style (e.g., is helpful, has a sense of humor, is empowering, friendly and humble).
• Capacity to lead (e.g., is committed to the organization, devoted in terms of time given to it, capable of seeing the big picture, able to handle contentious issues, and collaborative).
• Personal attributes (e.g., is bright/intelligent, trustworthy, confident, thoughtful, organized, focused, and creative).
• Ability to relate (e.g., is flexible, easy going, non-judgmental and calm).
• Ability to advance the organization externally (e.g., possesses connections and influence with key people and is willing to use them) (see Harrison & Murray, 2012; Harrison, Murray, & Cornforth, 2013).
Herman and Heimovics (2005) identified the following competencies of “board-centered” CEOs. They are:
• Facilitate interaction in board relationships;
• Show consideration and respect toward board members;
• Envision change and innovation for the organization with the board;
• Provide useful and helpful information for the board; and
• Promote board accomplishments and productivity.
The National Learning Initiative (2003) also identified competencies of effective leadership volunteers, saying they:
• Are motivated to serve (e.g., recruited for the right reasons, empowered for the service of mission/others)
• Create a shared vision and align strategically (e.g., are informed, consider best practices, contribute to the development of, and commitment to, a shared vision that provides meaning and direction)
• Develop effective relationships (e.g., nurture a healthy organization and work environment, are socially aware and maintain effective relationships)
• Create value (e.g., open to innovation, creativity, and change; translate theories into action; are responsive and accountable)
One of the better ways to design the kind of leadership that is best for your organization is that offered by the Competing Values Approach to leadership effectiveness (see Quinn et al., Becoming a Master Manager: A Competing Values Approach, 5th edition, for a description of the leadership competencies and assessment tools that can be used to assess leadership effectiveness). The Competing Values Approach to assessing and developing leadership competency recognizes that there are different values that underlie leadership styles (e.g., the tendency to focus on people, strategic goals, management processes, innovation and changes in the external environment, etc.) and that these values can create tensions between leaders involved in the governance process. They have created a set of diagnostic criteria to assess leadership effectiveness and surface tensions in the leadership process. To develop leadership, they say leaders simply need enough information to adjust their behavior rather than to alter it altogether. A brief, hypothetical sketch of how such diagnostic ratings might be summarized appears at the end of this chapter.
This “balanced” approach to leadership development, which recognizes there are competing values and leadership styles, should reduce tensions and the tendency for organizations to swing from one ineffective leader to another. Regardless of the approach or tool used, leadership development is an opportunity for nonprofit boards to:
• Assess leadership competency and isolate the contributions nonprofit leaders make to the board and organization through the governance process.
• Discuss tensions that exist between leaders and groups in the governance process. Nonprofit leaders should also discuss how to develop leadership competency and overcome situations in which leadership effectiveness is challenged (e.g., crisis, board chair or CEO turnover, etc.).
• Develop a focused plan for nonprofit leadership development that will be reviewed as part of the board performance assessment process.
• Increase the responsibility of leaders in the governance process (e.g., from board member with no committee responsibilities, to committee member, to officer, and ultimately to chair of the board).
• Recognize leaders for their leadership contributions to the board and organization.
Table 11 contains additional useful information and resources to increase the governance effectiveness of the organization through leadership.
Table 11: Additional Board Leadership Resources
Topic | Country | Source | Website
Leadership Development: General | U.S.A. | National Council of Nonprofits | www.councilofnonprofits.org/r...pic/leadership
Leadership Development: General | U.S.A. | The Bridgespan Group | http://www.bridgespan.org/Publicatio...x#.U27eO4FdX84
Leadership Development: General | Canada | Ivey Business School, Western University | iveybusinessjournal.com/topic...w#.U27f34FdX85
Leadership Assessment Tools | U.S.A. | BoardSource | www.boardsource.org/eweb/asae/default.html
CEO Leadership Development | U.S.A. | Stanford Social Innovation Review | http://www.ssireview.org/articles/en...ership_deficit
CEO Leadership Development | Britain | KnowHow NonProfit | http://knowhownonprofit.org/leadersh...tive/framework
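To make the idea of diagnostic criteria and “surfacing tensions” a little more concrete, here is a minimal, hypothetical sketch of how ratings of a chair or CEO might be averaged across the four competing-values quadrants and checked for imbalance. The quadrant labels follow Quinn and Rohrbaugh’s framework, but the specific items, the 1-to-5 rating scale, and the imbalance threshold are illustrative assumptions made only for this example; this is not the published assessment instrument.

```python
# Hypothetical scoring of leadership ratings against the four
# competing-values quadrants. Assumed scale: 1 (low) to 5 (high).
from statistics import mean

# Illustrative mapping of rated behaviors to quadrants (assumed, not official).
QUADRANTS = {
    "Collaborate (human relations)": ["shows respect", "builds trust", "develops people"],
    "Create (open systems)": ["encourages innovation", "scans the environment"],
    "Control (internal process)": ["runs orderly meetings", "monitors compliance"],
    "Compete (rational goal)": ["sets clear goals", "drives results"],
}

def quadrant_profile(ratings):
    """Average the 1-5 ratings for each quadrant (assumes every quadrant
    has at least one rated item) and flag a possible style imbalance."""
    profile = {
        quadrant: mean(ratings[item] for item in items if item in ratings)
        for quadrant, items in QUADRANTS.items()
    }
    spread = max(profile.values()) - min(profile.values())
    # A large spread suggests over-reliance on one style, a possible tension.
    return profile, spread > 1.5

sample_ratings = {
    "shows respect": 5, "builds trust": 4, "develops people": 4,
    "encourages innovation": 2, "scans the environment": 2,
    "runs orderly meetings": 5, "monitors compliance": 5,
    "sets clear goals": 3, "drives results": 3,
}
profile, imbalance = quadrant_profile(sample_ratings)
print(profile)
print("Possible imbalance:", imbalance)
```

The point of such a profile is not the numbers themselves but the conversation they prompt about which styles a leader leans on and which are being neglected.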
textbooks/biz/Management/Book%3A_Guidelines_for_Improving_the_Effectiveness_of_Boards_of_Directors_of_Nonprofit_Organization_(Murray_and_Harrison)/10%3A_Leadership_on_the_Board.txt
The purpose of this guidebook has been to a) help you understand some of the issues that challenge the effectiveness of nonprofit boards, b) offer some explanations as to why they exist, and c) provide guidance on how to manage them so as to improve the effectiveness of the governance function. The book, and the Board Check-Up research project of which it is a part, is derived from the idea of health checkups in medicine. The social science that underlies the research is the theory of organizational change. Simply put, by surfacing issues (symptoms) in the governance process, the stage is set for potential change in governance practices (treatment). However, as anyone involved in nonprofit organizations and governance knows, making change is easier said than done. In fact, our early research results that track the impact of the Board Check-Up show that, while the majority of boards do report making changes in governance practices in each of the dimensions assessed in the Board Check-Up, not all boards do so, and some kinds of changes are made more often than others (e.g., changes related to board meetings are made more frequently than changes in board culture and leadership) (see Harrison and Murray, forthcoming). For this reason, we recommend boards take the Board Check-Up on a regular basis and use it as an opportunity to delve deeper into discussions of the symptoms and why they exist in the board (diagnoses), and what can be done about them (treatment). Results from our research on the change process show that the Board Check-Up fills gaps in board leadership and technical capacity to self-assess performance (Harrison, 2014). In addition to providing a model, theory, and online tool for deciding on change, we’ve also provided links to additional resources that may be useful when deciding what practices need to change. While resources are organized by country, many provide useful guidance and tools that apply across countries. By no means do we provide an exhaustive review of the websites and literature on governance effectiveness in this book. Please consider additional sources and adopt those that seem to be a good fit for your board and organization.
Where Do You Go from Here?
The final section of these Guidelines is directed primarily at those who are using them to self-assess board performance as part of an organization registered to take the Board Check-Up at www.boardcheckup.com or who are part of a course on nonprofit governance of which the Board Check-Up is a learning activity. It describes ways in which the results of the Board Check-Up can be used to promote dialogue and decisions regarding needed changes in governance practices. The results of the Board Check-Up will give you some ideas about possible difficulties that could be keeping your board from performing at its best. How these results are used will determine how valuable they might be in helping to make changes that will make the governance function of your organization more effective. Here are some suggestions for getting the most from the self-assessment process. As a general rule, it is desirable to take action on the results of the questionnaire as quickly as possible after it is completed, while the process is still fresh in everyone’s minds. If possible, create a small “Board Self-Assessment Implementation Task Force” to take the lead in this final phase. Alternatively, an existing board committee such as a Governance or Executive Committee could take on this job.
This committee should choose a chair—possibly the person who acted as Board Check-Up Coordinator. It should review the findings and discuss the best way to present them to the board as a whole. A special board meeting, or retreat, should be organized to review the findings. If possible, all those who were originally asked to participate should be invited, e.g., in addition to board members, ask top managers, senior volunteers, etc. The special board meeting or retreat should proceed as follows:
The Chair of the meeting should begin by reviewing the reasons for engaging in this self-assessment exercise and go on to make the following points:
• When there is a strong consensus that certain issues are real problems, it is important not to jump to conclusions about why they exist or what should be done about them. Instead, they should be carefully analyzed. We therefore recommend that this special board meeting not be used to make decisions but only to seek consensus on issues and identify possible solutions. The Task Force would promise to take this input and return later with well-thought-out formal recommendations for change, if needed.
• It is possible that some problems, on further discussion, will be found to be simply the result of lack of knowledge or experience on the part of some participants. These can be corrected by better communications.
• The discussion should not take the form of blaming anyone for any of the issues identified.
Discuss the significance of the results obtained in each of the topic areas covered in this Final Report:
• Results for each of the nine distinct elements of board effectiveness
• The 10 issues that might be the most challenging
• The 10 things we do best
• Total score
• Percentage of “Not Sure”
• Response rate
(A brief illustrative sketch of how summary figures like these can be tallied from raw questionnaire responses appears at the end of this chapter.)
If the group is large enough, consider breaking into smaller groups to discuss the following questions; otherwise, pose them in a plenary format:
• What are the issues that most need working on in terms of importance and immediacy?
• For each of the top priority issues, why do they exist? (The meeting should be reminded that the reasons might not always be simple. For example, if there is strong agreement that board meetings are too long, this could be for many reasons: a failure to establish and enforce time limits for agenda items, board members being unprepared, poorly prepared committee reports, too much time spent on routine items leaving important policy issues until late in the meeting, etc.)
• What positive, future-oriented changes might be made to end the problems?
The Implementation Task Force should take the input provided at the special board meeting and use it to prepare a series of recommendations for change, along with supporting arguments for them. These would be brought to a formal meeting of the board for discussion and approval. Finally, responsibility for tracking the outcomes of these changes should be allocated to a person or committee that will report at the end of a year on the degree of improvement in the governance process. This should signal the beginning of a process of board self-evaluation that occurs every year. Continuing and long-lasting effectiveness in governance practices is best achieved if the board commits itself to assessing its performance on a regular and long-term basis. Here are three options for you to follow to ensure this kind of long-term success:
1. Be part of cutting-edge research: This guidebook is part of a larger research study of nonprofit board effectiveness.
Participants gain access to free online tools and resources produced from the research on the state of nonprofit board effectiveness in nonprofit organizations around the world. If you have taken the Board Check-Up online (www.boardcheckup.com), then you are a participant in this research. If you haven’t, then consider registering for the University at Albany, SUNY sponsored research project online through the website or contact Professor Yvonne Harrison [email protected] for more information. 2. Take an interactive nonprofit governance course for free or credit: In January 2015, Professor Harrison opens her University at Albany, SUNY Nonprofit Governance course to the public as part of the Open SUNY strategy to increase access to education through online learning. Coursera’s online teaching platform hosts the course and interactive instructional strategies are incorporated to teach course concepts, which include main concepts in this and other nonprofit governance books. Through the course learning activities, participants receive guided instruction on board performance assessment. Along with faculty and specialized educational technology support, peer learning groups support and evaluate teaching and learning in the online environment. 3. Join a peer learning group to develop and help grow your board and the field of nonprofit leadership. Participants in the Board Check-Up research and Nonprofit Governance course will be invited to join various nonprofit leadership peer learning groups on topics of importance to participants. These groups will be facilitated by faculty, nonprofit leaders, and students in the University at Albany and SUNY Open community.
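For readers who want to see how summary figures like a response rate, the percentage of “Not Sure” answers, and ranked lists of the most challenging issues and the things a board does best can be produced from raw questionnaire responses, here is a minimal illustrative sketch. It is not the Board Check-Up’s own scoring code; the answer scale, field names, and the simple “percent agreement” rule are assumptions made only for this example.

```python
# Illustrative tally of self-assessment responses (not the Board Check-Up's
# actual scoring). Assumed answer scale: "agree", "disagree", "not sure".
from collections import defaultdict

def summarize(responses, invited_count, top_n=10):
    """responses: list of dicts, one per respondent, mapping statement -> answer."""
    counts = defaultdict(lambda: {"agree": 0, "disagree": 0, "not sure": 0})
    for person in responses:
        for statement, answer in person.items():
            counts[statement][answer] += 1

    total_answers = sum(sum(c.values()) for c in counts.values())
    not_sure = sum(c["not sure"] for c in counts.values())

    # Percent agreement per statement; high agreement with a problem
    # statement flags a possible issue, low agreement a relative strength.
    agreement = {
        s: 100.0 * c["agree"] / max(1, sum(c.values())) for s, c in counts.items()
    }
    most_challenging = sorted(agreement, key=agreement.get, reverse=True)[:top_n]
    doing_best = sorted(agreement, key=agreement.get)[:top_n]

    return {
        "response_rate": 100.0 * len(responses) / invited_count,
        "percent_not_sure": 100.0 * not_sure / max(1, total_answers),
        "agreement_by_statement": agreement,
        "most_challenging": most_challenging,
        "doing_best": doing_best,
    }

# Example: two respondents out of three invited participants.
sample = [
    {"Board meetings often go on too long.": "agree",
     "The board chair tends to be overly controlling.": "not sure"},
    {"Board meetings often go on too long.": "agree",
     "The board chair tends to be overly controlling.": "disagree"},
]
print(summarize(sample, invited_count=3, top_n=2))
```

A board working in a spreadsheet rather than a script can apply exactly the same logic: count the answers for each statement, convert the counts to percentages, and sort.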
textbooks/biz/Management/Book%3A_Guidelines_for_Improving_the_Effectiveness_of_Boards_of_Directors_of_Nonprofit_Organization_(Murray_and_Harrison)/11%3A_Conclusions.txt
For the past dozen years, Vic Murray and Yvonne Harrison have worked collaboratively combining their knowledge and expertise to make research, education, and tools available to leaders in the nonprofit sector in need of them. Vic Murray, Ph.D. Vic Murray is currently Adjunct Professor in the School of Public Administration at the University of Victoria. From 1983 to 1995, he was director of the program in voluntary sector management in the Schulich School of Business at York University, Toronto. Dr. Murray specializes in the study of voluntary sector organizations of all types with particular emphasis on the areas of board governance, strategic planning, inter-organizational collaboration, and the assessment of organizational effectiveness. He is also an active consultant and volunteer in these areas. As Director of the Nonprofit Leadership and Management Program at York University he developed Canada’s first certificate and master’s level programs in that field. He is the author of many books, articles and papers in the fields of organizational behavior and nonprofit management. His most recent book is The Management of Nonprofit and Charitable Organizations in Canada (LexisNexis, 2009). Currently, he is a member of the Advisory Board for the journal Nonprofit Management and Leadership, and active in the Association for Research on Nonprofit Organizations and Voluntary Action (ARNOVA). In 2002 he was awarded ARNOVA’s Distinguished Lifetime Achievement Award. In 1995 the Canadian Centre for Philanthropy awarded him the Alan Arlett medal for distinguished contributions to philanthropy research. In 2005, he helped to found the Association for Nonprofit and Social Economy Research of Canada and, in 2013, was awarded its Distinguished Service Award. Dr. Murray’s current research interest is a longitudinal study of the impact of the self-assessment of governance performance in nonprofit organizations (see www.boardcheckup.com) with Dr. Yvonne Harrison of the State University of New York at Albany. Yvonne D. Harrison, Ph.D. Yvonne Harrison is Assistant Professor in the Department of Public Administration and Policy in Rockefeller College of Public Affairs and Policy, University at Albany, SUNY. Prior to joining the Rockefeller College faculty, Yvonne was Assistant Professor in the Center for Nonprofit and Social Enterprise Management at Seattle University, Washington where she conducted nonprofit leadership research and taught courses in nonprofit governance and information management in nonprofit and government organizations. Dr. Harrison has expertise in the governance and leadership of nonprofit organizations and the adoption and impact of information and communications technology (ICT) in nonprofit and voluntary sector organizations. Her current research examines questions about the effectiveness of nonprofit governing boards and the impact of online board performance self-assessment on nonprofit governance and organizational effectiveness. Funding for this research comes from the following sources: • Institute for Nonprofit Studies, Mount Royal University in Calgary, Alberta, Canada • University at Albany Faculty Research Award Programs (A and B) • Rockefeller College of Public Affairs and Policy, University at Albany, SUNY Currently, she is a member of the Association for Research on Nonprofit Organizations and Voluntary Action (ARNOVA) and Association for Nonprofit and Social Economy Research of Canada (ANSER). In 2002, Dr. Harrison was awarded (with John Langford), the J. E. 
Hodgetts Award for Best Article in Canadian Public Administration (CPA). She is the author of a number of other peer-reviewed journal articles, book chapters, research reports, and publications. She holds a Bachelor of Science in Nursing, a Master of Public Administration, and a PhD in Public Administration from the University of Victoria, British Columbia, Canada.
Reviewer’s Notes
Review by Mike Flinton
Dr. Vic Murray and Dr. Yvonne Harrison have created a truly unique “how-to manual” that surpasses that clichéd label; they have successfully developed a management and leadership tool designed to help nonprofit board members, their CEOs, and aspiring nonprofit professionals to lead in an effective and efficient manner that ensures participation by all. This book is suitable for current board members and CEOs of nonprofit organizations in the U.S., Canada, or abroad, as well as graduate-level faculty and students in the U.S. or Canada. Still others may find it helpful depending on the legal, social, and cultural environments that they and their nonprofit organizations operate in. Having worked as a team, and by engaging hundreds of veteran board members and their organizations, Murray and Harrison use what they refer to as a “health check-up” assessment model and methodology. Using this, they’ve created a paradigm shift that enables nonprofit leaders to identify and explore the “Symptoms,” “Diagnosis,” and “Treatment” of the illnesses most common to nonprofit organizations. Throughout the 11 chapters of this guidebook, the authors remain committed to the health check-up analogy and process, which enables those in the trenches of nonprofit organizations, as well as those in the classroom, to use the text as a highly functional analysis and remedy tool. Going well beyond a simple “how-to” mindset, the Symptoms, Diagnosis, and Treatment discussions on each topic are backed up with additional information accompanied by a plethora of .org, .com, .edu, and .gov web sites and print materials supporting what these two respected educators have to offer. This publication can serve either as a standalone textbook or a supporting tool to the online Board Check-Up, which the authors developed before writing the guidebook. Hence, www.boardcheckup.com and the textbook were wisely developed for a variety of purposes and audiences. Whether using it as an individual tool or accompanying the self-assessment online through Board Check-Up, whether you are directly faced with the challenges of overseeing a nonprofit organization, responsible for teaching others “how to,” or seeking to someday be a nonprofit professional yourself, you would be wise to examine this guidebook.
Mike Flinton has over 20 years’ experience as a not-for-profit and higher education professional. In addition to having served as the director of the Saratoga Automobile Museum in Saratoga Springs, NY, he has enjoyed being a board member and leader in a variety of organizations ranging from the Executive Service Corps of the Tri-Cities (ESCOT) to the Underground Railroad History Project of the Capital Region, among many others. Before retiring from SUNY, Mike taught not-for-profit administration and management at SUNY Oneonta’s nationally recognized Cooperstown Graduate Program in Museum Studies. He has also worked at four other SUNY campuses and mentored students from Skidmore College pursuing careers in the not-for-profit sector.
He has advised and supported such widely recognized organizations as the Schenectady Museum (now called MiSci), Capital District Habitat for Humanity, Historic Albany Foundation, the World Awareness Children’s Museum in Glens Falls, and Wiawaka Holiday House, a women’s retreat center in Lake George. He is a regular guest lecturer at graduate-level not-for-profit administration and management classes at UAlbany. Before becoming a museum professional and consultant, Mike had a successful career in the United States Air Force, where he lived and worked in more than a dozen countries and became involved in diverse social and public services programs, as well as history, art, and cultural organizations in the U.S. and abroad. Mike has an MS degree in Public Administration from Central Michigan University, an MA in History from the University at Albany, SUNY, a BS in Business Management & Administration from the University of Maryland’s European Division, and a BS in Human Resource Management from The New School for Social Research in New York City.
Review by Hélène Cameron
Guidelines for Improving the Effectiveness of Boards of Directors of Nonprofit Organizations will interest those who care about the governance of NPOs, especially board members, managers, and students of nonprofit organizations. The authors, Dr. Vic Murray and Dr. Yvonne Harrison, are specialists in the study of voluntary sector organizations, and their deep understanding of the subject matter shows. As a practitioner with many years of experience with and on boards of nonprofit organizations, I have lived much of what is described in these guidelines. Murray and Harrison’s comprehensive yet concise and accessible treatment of what makes boards tick is dead-on. They use an effective device patterned on the health check-up to link the “symptoms” of poor board performance with a “diagnosis” and “treatment” and recommend resources to consult for a deeper understanding and practical tools. It’s all in one place... and it is readable and credible. The guidebook mirrors Board Check-Up, an online self-assessment tool they designed to assist in improving board performance. Each chapter deals with one of the nine effectiveness challenges faced by the board: authority and responsibilities; role in planning, performance assessment, and fundraising; structure and operating procedures, including meetings; composition and development; informal culture; and finally, leadership. Whether used in conjunction with the online tool or not, the guidebook should prove useful in several ways:
• as a framework for understanding the role, structure and operation of a board within a nonprofit organization
• as the basis for orienting novice board members to the nature and scope of their new environment
• in identifying the action that boards might take to improve performance and the resources and tools available to assist them
• in setting priorities for corrective action, based on an understanding of the potential impact of the assessed area and the feasibility of the remedy.
As the authors repeatedly counsel, boards have to do their own homework and find their own fit. This guidebook should help get the job done.
Through employment and community service, Hélène Cameron has an extensive background in non-profit governance, primarily in the areas of education and health. She gained valuable experience as executive director of non-profit organizations and as a volunteer and director on several non-profit boards in British Columbia.
As a consultant, she has assisted several societies in the governance and strategic renewal process. Acknowledgments A special thanks to Chancellor Nancy Zimpher of the State University of New York and her staff for creating a strategy and source of funding to situate and advance our work. The following grants and people behind them deserve acknowledgement: • 2014 Open SUNY Textbook Grant (Principal Investigators, Cyril Oberlander, Kate Pitcher, and Allison Brown); • 2014 Open SUNY Innovative Instructional Technology Grant (IITG) (SUNY Open Director, Lisa Stephens); and • 2014 University at Albany Online Teaching and Learning Grant (University at Albany Provost, Susan Phillips and Associate Provost for OTLG, Peter Shea). These grants will begin the process of increasing access to nonprofit management and leadership education particularly students preparing for careers in the nonprofit sector and those working in the sector who face educational barriers such as cost and time constraints. Finally, the web resources described in this book would not be possible without the research assistance and dedication of Sreyashi Chakravarty, a University at Albany, SUNY graduate student. References Aulgur, J. (2013). Nonprofit Board Members Self-Perception in the Role of Organizational Governance and The Balanced Scorecard, Dissertation. Retrieved online from: The University of Arkansas http://gateway.proquest.com/openurl?...pqdiss:3588504 Bradshaw, P., Murray, V. & Wolpin, J. (1992). Do nonprofit boards make a difference? An exploration among board structure, process and effectiveness. Nonprofit and Voluntary Sector Quarterly, 21(3), 227-249. Bradshaw, P., Fredette, C. & Sukornyk, L. (2009). A Call to Action: Diversity on Canadian Not-for-Profit Boards. Retrieved from http://www.yorku.ca/mediar/special/d...rtjune2009.pdf Brown, W. (2007). Board performance practices and competent board members: Implications for practice. Nonprofit Management and Leadership, 17(3), 301-317. Brudney, J. and Murray, V. (1998). Do intentional efforts really make boards work? Nonprofit Management and Leadership, 8, 333-348. Brudney, J. and Nobbie, P. (2002). Training the policy governance model in nonprofit boards. Nonprofit Management and Leadership, 12(4), 387-408. Cameron, Kim. (n.d.) The Competing Values Framework: An Introduction. Retrieved from competingvalues.com/competing...troduction.pdf Carver, J. (2006). Boards that make a difference. San Francisco: John Wiley & Sons. Chait, R. P., Ryan, W. P., & Taylor, B. E. (2005). Governance as leadership: Reframing the work of nonprofit boards. New Jersey: John Wiley and Sons. Cornforth, C. J. (2001). What makes boards effective? An examination of the relationships between board inputs, structures, processes, and effectiveness in nonprofit organizations. Corporate Governance: an International Review, 9(3), 217-227. Fredette, C. and Bradshaw, P. (2012). Social capital and nonprofit governance effectiveness, Nonprofit Management and Leadership, 22(4), 391-409. Gill, M. (2005). Governing for results: A Directors guide to good governance. Mississauga, Canada: Trafford. Green, J. C. & D. W. Griesinger. (1996). Board performance and organizational effectiveness in nonprofit social services organizations. Nonprofit Management and Leadership, 6(4): 381-402. Harrison, Y.D. (2014). What influences changes in governance behavior and practices? 
Results from a longitudinal study of the effects of online board performance assessment on nonprofit governance effectiveness, Paper presented at the annual conference of the Association for Research in Nonprofit Organizations and Voluntary Action (ARNOVA), November 20, 2014, Denver, CO. Harrison, Y. D. (2014). Optimizing the potential of Information and Communications Technology in nonprofit organizations. In K. Seel (Ed.), The management of nonprofit and charitable organizations (pp. 465-516). Toronto: LexisNexis. Harrison, Y. D. & Murray, V. (2012). Perspectives on the role and impact of chairs of nonprofit organization boards of directors: A grounded theory mixed-method study. Nonprofit Management and Leadership, 22(4), 411-438. Harrison, Y., Murray, V., & Cornforth, C. (2013). The role and impact of chairs of nonprofit boards. In C. Cornforth & W. Brown (Eds.), New perspectives on nonprofit governance. Routledge, UK. Herman, R.D., & Renz, D.O. (1998). Nonprofit organizational effectiveness: Contrasts between especially effective and less effective organizations. Nonprofit Management and Leadership, 9, 23-38. Herman, R.D. & Renz, D.O. (1999). Theses on nonprofit organizational effectiveness. Nonprofit and Voluntary Sector Quarterly, 28, 107-126. Herman, R., & Renz, D. (2008). Advancing nonprofit organizational effectiveness research and theory. Nonprofit Management and Leadership 18(4), 399-415. Herman, R., Renz, D. O., & Heimovics, R. D. (1997). Board practices and board effectiveness in local nonprofit organizations. Nonprofit Management and Leadership, 7, 373– 385. Herman, R. & Heimovics, D. (2005). Executive leadership. In D. Renz (Ed.), Jossey Bass handbook on nonprofit management and leadership. San Francisco: Jossey-Bass. Hodge, M.M. & Piccolo, R. F. (2011). Nonprofit board effectiveness, private philanthropy, and financial vulnerability. Public Administration Quarterly, 35(4), 520-550. Holland, T.P. & Jackson, D.K. (1998). Strengthening board performance: Findings and lessons from demonstration projects. Nonprofit Management and Leadership, 9, 121-134. Jackson, P. (2006). Nonprofit risk management and contingency planning. Hoboken, New Jersey: John Wiley and Sons. Kaplan, D. & Norton, D. (1996). The Balanced Score Card: Translating Strategy into Action, Boston: Harvard University Press Kooiman, J. (2003). Governing as governance. London: Sage. Millesen, J. (2004). Sherpa? Shepherd? Conductor? Circus Master? Board Chair. The Nonprofit Quarterly, 39-42. Miller, C. (2008). Truth or consequences: The implications of financial decisions. Nonprofit Quarterly. Retrieved from http://quarterly288.rssing.com/brows...322835&item=43 Miller, D. & Droge, C. (1986). Psychological and traditional determinants of structure. Administrative Science Quarterly, 31, 531-560. Murray (2010). Chapter Title. In D. Renz (Ed.), Jossey Bass handbook on nonprofit management and leadership. San Francisco: Jossey-Bass. Murray, V. (2014). Managing the governance function: Developing effective boards of directors. In K. Seel (Ed.), The management of nonprofit and charitable organizations in Canada. Toronto: LexisNexis. National Learning Initiative (2003). What do voluntary sector leaders do? A report on a Joint Project of The Coalition of National Voluntary Organizations and The Association of Canadian Community Colleges. Retrieved from http://www.vsi-isbc.org/eng/hr/pdf/nli_report.pdf Ostrower, F. (2007). Nonprofit Governance in the United States. Center on Nonprofits and Philanthropy, The Urban Institute. 
Quinn, R. & Rohrbaugh, J. (1981). A competing values approach to organizational effectiveness. Public Productivity Review, 122. Quinn, R. E. & Rohrbaugh, J. (1983). A spatial model of effectiveness criteria: Toward a competing values approach to organizational analysis. Management Science, 29, 363-377. Quinn, R., Faerman, S., Thompson, M., McGrath, M., & St. Clair, L. (2010). Becoming a master manager. San Francisco: John Wiley and Sons. Renz, David O. (2006). “Reframing Governance.” Nonprofit Quarterly, 13(4), 6-13. Renz, David O. (2012). “Reframing Governance II.” Nonprofit Quarterly, Special Governance Issue, accessed online at https://nonprofitquarterly.org/gover...ernance-2.html. Tompkins, J. (2005). Organization Theory and Public Management. Belmont: Thomson Wadsworth Publishers. Zaccaro, S. J. & Klimoski, R. J. (2001). The Nature of Organizational Leadership. San Francisco: Jossey Bass.
textbooks/biz/Management/Book%3A_Guidelines_for_Improving_the_Effectiveness_of_Boards_of_Directors_of_Nonprofit_Organization_(Murray_and_Harrison)/About_the_Authors.txt
Learning Objective
After studying this section you should be able to do the following:
1. Appreciate how in the past decade, technology has helped bring about radical changes across industries and throughout societies.
This book is written for a world that has changed radically in the past decade. At the start of the prior decade, Google barely existed and well-known strategists dismissed Internet advertising models (Porter, 2001). By decade’s end, Google brought in more advertising revenue than any firm, online or off, and had risen to become the most profitable media company on the planet. Today billions in advertising dollars are fleeing old media and pouring into digital efforts, and this shift is reshaping industries and redefining the skills needed to reach today’s consumers. A decade ago the iPod also didn’t exist and Apple was widely considered a tech-industry has-been. By spring 2010 Apple had grown to be the most valuable tech firm in the United States, selling more music and generating more profits from mobile device sales than any firm in the world. Moore’s Law and other factors that make technology faster and cheaper have thrust computing and telecommunications into the hands of billions in ways that are both empowering the poor and poisoning the planet. Social media barely warranted a mention a decade ago, but today, Facebook’s user base is larger than any nation, save for China and India. Firms are harnessing social media for new product ideas and for millions in sales. But with promise comes peril. When mobile phones are cameras just a short hop from YouTube, Flickr, and Twitter, every ethical lapse can be captured, every customer service flaw graffiti-tagged on the permanent record that is the Internet. The service and ethics bar for today’s manager has never been higher. Speaking of globalization, China started the prior decade largely as a nation unplugged and offline. But today China has more Internet users than any other country and has spectacularly launched several publicly traded Internet firms including Baidu, Tencent, and Alibaba. By 2009, China Mobile was more valuable than any firm in the United States except for Exxon Mobil and Wal-Mart. Think the United States holds the number one ranking in home broadband access? Not even close—the United States is ranked fifteenth (Shankland, 2010). The way we conceive of software and the software industry is also changing radically. IBM, HP, and Oracle are among the firms that collectively pay thousands of programmers to write code that is then given away for free. Today, open source software powers most of the Web sites you visit. And the rise of open source has rewritten the revenue models for the computing industry and lowered computing costs for start-ups to blue chips worldwide. Cloud computing and software as a service are turning sophisticated, high-powered computing into a utility available to even the smallest businesses and nonprofits. Data analytics and business intelligence are driving discovery and innovation, redefining modern marketing, and creating a shifting knife-edge of privacy concerns that can shred corporate reputations if mishandled. And the pervasiveness of computing has created a set of security and espionage threats unimaginable to the prior generation. As the last ten years have shown, tech creates both treasure and tumult. These disruptions aren’t going away and will almost certainly accelerate, impacting organizations, careers, and job functions throughout your lifetime.
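The “faster and cheaper” dynamic attributed to Moore’s Law is, at bottom, a statement about compounding. The following back-of-the-envelope sketch is not from the text; it simply assumes the commonly cited figure that price performance doubles roughly every two years and shows how quickly that multiplies.

```python
# Back-of-the-envelope illustration of the compounding behind Moore's Law.
# Assumption (illustrative): computing per dollar doubles roughly every two years.

DOUBLING_PERIOD_YEARS = 2.0  # assumed doubling period

def improvement_factor(years: float, doubling_period: float = DOUBLING_PERIOD_YEARS) -> float:
    """Return how many times faster/cheaper computing gets over `years`."""
    return 2 ** (years / doubling_period)

if __name__ == "__main__":
    for years in (2, 5, 10, 20):
        factor = improvement_factor(years)
        print(f"After {years:>2} years: ~{factor:,.0f}x the computing per dollar")
```

Roughly a thirty-two-fold improvement in a decade, and about a thousand-fold over two, is the scale of change that moves computing from the data center into pockets, vacuums, and luggage tags.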
It’s time to place tech at the center of the managerial playbook.
Key Takeaways
• In the prior decade, firms like Google and Facebook have created profound shifts in the way firms advertise and individuals and organizations communicate.
• New technologies have fueled globalization, redefined our concepts of software and computing, crushed costs, fueled data-driven decision making, and raised privacy and security concerns.
Questions and Exercises
1. Visit a finance Web site such as http://www.google.com/finance. Compare Google’s profits to those of other major media companies. How have Google’s profits changed over the past few years? Why have the profits changed? How do these compare with changes in the firm you chose?
2. How is social media impacting firms, individuals, and society?
3. How do recent changes in computing impact consumers? Are these changes good or bad? Explain. How do they impact businesses?
4. What kinds of skills do today’s managers need that weren’t required a decade ago?
5. Work with your instructor to decide ways in which your class can use social media. For example, you might create a Facebook group where you can share ideas with your classmates, join Twitter and create a hash tag for your class, or create a course wiki.
1.02: It’s Your Revolution
Learning Objective
After studying this section you should be able to do the following:
1. Name firms across hardware, software, and Internet businesses that were founded by people in their twenties (or younger).
The intersection where technology and business meet is both terrifying and exhilarating. But if you’re under the age of thirty, realize that this is your space. While the fortunes of any individual or firm rise and fall over time, it’s abundantly clear that many of the world’s most successful technology firms—organizations that have had tremendous impact on consumers and businesses across industries—were created by young people. Consider just a few: Bill Gates was an undergraduate when he left college to found Microsoft—a firm that would eventually become the world’s largest software firm and catapult Gates to the top of the Forbes list of the world’s wealthiest people (enabling him to also become the most generous philanthropist of our time).
Figure 1.1 Young Bill Gates appears in a mug shot for a New Mexico traffic violation. Microsoft, now headquartered in Washington State, had its roots in New Mexico when Gates and partner Paul Allen moved there to be near early PC maker Altair. Albuquerque, New Mexico police department – Bill Gates mugshot – public domain.
textbooks/biz/Management/Book%3A_Information_Systems__A_Managers_Guide_to_Harnessing_Technology/01%3A_Setting_the_Stage-_Technology_and_the_Modern_Enterprise/1.01%3A_Techs_Tectonic_Shift-_Radically_Changing_Business_Landscapes.txt
Learning Objectives
After studying this section you should be able to do the following:
1. Appreciate the degree to which technology has permeated every management discipline.
2. See that tech careers are varied, richly rewarding, and poised for continued growth.
Shortly after the start of the prior decade, there was a lot of concern that tech jobs would be outsourced, leading many to conclude that tech skills carried less value and that workers with tech backgrounds had little to offer. Turns out this thinking was stunningly wrong. Tech jobs boomed, and as technology pervades all other management disciplines, tech skills are becoming more important, not less. Today, tech knowledge can be a key differentiator for the job seeker. It’s the worker without tech skills that needs to be concerned. As we’ll present in depth in a future chapter, there’s a principle called Moore’s Law that’s behind fast, cheap computing. And as computing gets both faster and cheaper, it gets “baked into” all sorts of products and shows up everywhere: in your pocket, in your vacuum, and on the radio frequency identification (RFID) tags that track your luggage at the airport. Well, there’s also a sort of Moore’s Law corollary that’s taking place with people, too. As technology becomes faster and cheaper and developments like open source software, cloud computing, software as a service (SaaS), and outsourcing push technology costs even lower, tech skills are being embedded inside more and more job functions. What this means is that even if you’re not expecting to become the next Tech Titan, your career will doubtless be shaped by the forces of technology. Make no mistake about it—there isn’t a single modern managerial discipline that isn’t being deeply and profoundly impacted by tech.
• Finance
Many business school students who study finance aspire to careers in investment banking. Many i-bankers will work on IPOs, or initial public stock offerings, in effect helping value companies the first time these firms wish to sell their stock on the public markets. IPO markets need new firms, and the tech industry is a fertile ground that continually sprouts new businesses like no other. Other i-bankers will be involved in valuing merger and acquisition (M&A) deals, and tech firms are active in this space, too. Leading tech firms are flush with cash and constantly on the hunt for new firms to acquire. Cisco bought forty-eight firms in the prior decade; Oracle bought five firms in 2009 alone. And even in nontech industries, technology impacts nearly every endeavor as an opportunity catalyst or a disruptive wealth destroyer. The aspiring investment banker who doesn’t understand the role of technology in firms and industries can’t possibly provide an accurate guess at how much a company is worth.
Table 1.1 Top Acquirers of VC-Backed Companies, 2000–2009
Acquiring Company | Acquisitions
Cisco | 48
IBM | 35
Microsoft | 30
EMC Corporation | 25
Oracle Corp. | 23
Broadcom | 18
Symantec | 18
Hewlett-Packard | 18
Google | 17
Sun Microsystems | 16
Source: VentureSource.
Those in other finance careers will be lending to tech firms and evaluating the role of technology in firms in an investment portfolio. Most of you will want to consider tech’s role as part of your personal investments. And modern finance simply wouldn’t exist without tech. When someone arranges for a bridge to be built in Shanghai, those funds aren’t carried over in a suitcase—they’re digitally transferred from bank to bank.
And forces of technology blasted open the two-hundred-year-old floor trading mechanism of the New York Stock Exchange, in effect forcing the NYSE to sell shares in itself to finance the acquisition of technology-based trading platforms that were threatening to replace it. As another example of the importance of tech in finance, consider that Boston-based Fidelity Investments, one of the nation’s largest mutual fund firms, spends roughly $2.8 billion a year on technology. Tech isn’t a commodity for finance—it’s the discipline’s lifeblood.
• Accounting
If you’re an accountant, your career is built on a foundation of technology. The numbers used by accountants are all recorded, stored, and reported by information systems, and the reliability of any audit is inherently tied to the reliability of the underlying technology. Increased regulation, such as the heavy executive penalties tied to the Sarbanes-Oxley Act in the United States, has ratcheted up the importance of making sure accountants (and executives) get their numbers right. Negligence could mean jail time. This means the link between accounting and tech has never been tighter, and the stakes for ensuring systems accuracy have never been higher. Business students might also consider that while accounting firms regularly rank near the top of BusinessWeek’s “Best Places to Start Your Career” list, many of the careers at these firms are highly tech-centric. Every major accounting firm has spawned a tech-focused consulting practice, and in many cases, these firms have grown to be larger than the accounting services functions from which they sprang. Today, Deloitte’s tech-centric consulting division is larger than the firm’s audit, tax, and risk practices. At the time of its spin-off, Accenture was larger than the accounting practice at former parent Arthur Andersen (Accenture executives are also grateful they split before Andersen’s collapse in the wake of the prior decade’s accounting scandals). Now, many accounting firms that had previously spun off technology practices are once again building up these functions, finding strong similarities between the skills of an auditor and skills needed in emerging disciplines such as information security and privacy.
• Marketing
Technology has thrown a grenade onto the marketing landscape, and as a result, the skill set needed by today’s marketers is radically different from what was leveraged by the prior generation. Online channels have provided a way to track and monitor consumer activities, and firms are leveraging this insight to understand how to get the right product to the right customer, through the right channel, with the right message, at the right price, at the right time. The success or failure of a campaign can often be immediately assessed based on online activity such as Web site visit patterns and whether a campaign results in an online purchase. The ability to track customers, analyze campaign results, and modify tactics has amped up the return on investment of marketing dollars, with firms increasingly shifting spending from tough-to-track media such as print, radio, and television to the Web (Pontin, 2009). And new channels continue to emerge. Firms as diverse as Southwest Airlines, Starbucks, UPS, and Zara have introduced apps for the iPhone and iPod touch. In less than four years, the iPhone has emerged as a channel capable of reaching over 75 million consumers, delivering location-based messages and services, and even allowing for cashless payment.
The rise of social media is also part of this blown-apart marketing landscape. Now all customers can leverage an enduring and permanent voice, capable of broadcasting word-of-mouth influence in ways that can benefit and harm a firm. Savvy firms are using social media to generate sales, improve their reputations, better serve customers, and innovate. Those who don’t understand this landscape risk being embarrassed, blindsided, and out of touch with their customers. Search engine marketing (SEM), search engine optimization (SEO), customer relationship management (CRM), personalization systems, and a sensitivity to managing the delicate balance between gathering and leveraging data and respecting consumer privacy are all central components of the new marketing toolkit. And there’s no looking back—tech’s role in marketing will only grow in prominence.
• Operations
A firm’s operations management function is focused on producing goods and services, and operations students usually get the point that tech is the key to their future. Quality programs, process redesign, supply chain management, factory automation, and service operations are all tech-centric. These points are underscored in this book as we introduce several examples of how firms have designed fundamentally different ways of conducting business (and even entirely different industries), where value and competitive advantage are created through technology-enabled operations.
• Human Resources
Technology helps firms harness the untapped power of employees. Knowledge management systems are morphing into social media technologies—social networks, wikis, and Twitter-style messaging systems that can accelerate the ability of a firm to quickly organize and leverage teams of experts. Human resources (HR) directors are using technology for employee training, screening, and evaluation. The accessibility of end-user technology means that every employee can reach the public, creating an imperative for firms to set policy on issues such as firm representation and disclosure and to continually monitor and enforce policies as well as capture and push out best practices. The successful HR manager recognizes that technology continually changes an organization’s required skill sets, as well as employee expectations. The hiring and retention practices of the prior generation are also in flux. Recruiting hasn’t just moved online; it’s now grounded in information systems that scour databases for specific skill sets, allowing recruiters to cast a wider talent net than ever before. Job seekers are writing résumés with keywords in mind, aware that the first cut is likely made by a database search program, not a human being. The rise of professional social networks also puts added pressure on employee satisfaction and retention. Prior HR managers fiercely guarded employee directories for fear that a headhunter or competitive firm might raid top talent. Now the equivalent of a corporate directory can be easily pulled up via LinkedIn, a service complete with discreet messaging capabilities that can allow competitors to rifle-scope target your firm’s best and brightest. Thanks to technology, the firm that can’t keep employees happy, engaged, and feeling valued has never been more vulnerable.
• The Law
And for those looking for careers in corporate law, many of the hottest areas involve technology. Intellectual property, patents, piracy, and privacy are all areas where activity has escalated dramatically in recent years. The number of U.S.
patent applications awaiting approval has tripled in the past decade, while China saw a threefold increase in patent applications in just five years (Schmid & Poston, 2009). Firms planning to leverage new inventions and business methods need legal teams with the skills to sleuth out whether a firm can legally do what it plans to. Others will need legal expertise to help them protect proprietary methods and content, as well as to help enforce claims in the home country and abroad.
• Information Systems Careers
While the job market goes through ebbs and flows, recent surveys have shown there to be more IT openings than in any field except health care. Money magazine ranked tech jobs as two of the top five “Best Jobs in America.” BusinessWeek ranks consulting (which heavily hires tech grads) and technology as the second and third highest-paying industries for recent college graduates (Gerdes, 2008). Technology careers have actually ranked among the safest careers to have during the most recent downturn (Kaneshige, 2009). And Fortune’s ranking of the “Best Companies to Work For” is full of technology firms and has been topped by a tech business for four years straight. Students studying technology can leverage skills in ways that range from the highly technical to those that emphasize a tech-centric use of other skills. Opportunities for programmers abound, particularly for those versed in new technologies, but there are also roles for experts in areas such as user-interface design (who work to make sure systems are easy to use), process design (who leverage technology to make firms more efficient), and strategy (who specialize in technology for competitive advantage). Nearly every large organization has its own information systems department. That group not only ensures that systems get built and keep running but also increasingly takes on strategic roles targeted at proposing solutions for how technology can give the firm a competitive edge. Career paths allow for developing expertise in a particular technology (e.g., business intelligence analyst, database administrator, social media manager), while project management careers leverage skills in taking projects from conception through deployment. Even in consulting firms, careers range from hard-core programmers who “build stuff” to analysts who do no programming but might work identifying problems and developing a solutions blueprint that is then turned over to another team to code. Careers at tech giants like Apple, Google, and Microsoft don’t all involve coding end-user programs either. Each of these firms has its own client-facing staff that works with customers and partners to implement solutions. Field engineers at these firms may work as part of a sales team to show how a given company’s software and services can be used. These engineers often put together prototypes that are then turned over to a client’s in-house staff for further development. An Apple field engineer might show how a firm can leverage podcasting in its organization, while a Google field engineer can help a firm incorporate search, banner, and video ads into its online efforts. Careers that involve consulting and field engineering are often particularly attractive for those who enjoy working with an ever-changing list of clients and problems across various industries and in many different geographies. Upper-level career opportunities are also increasingly diverse.
Consultants can become partners who work with the most senior executives of client firms, helping identify opportunities for those organizations to become more effective. Within a firm, technology specialists can rise to be chief information officer or chief technology officer—positions focused on overseeing a firm’s information systems development and deployment. And many firms are developing so-called C-level specialties in emerging areas with a technology focus, such as chief information security officer (CISO) and chief privacy officer (CPO). Senior technology positions may also be a ticket to the chief executive’s suite. A recent Fortune article pointed out how the prominence of technology provides a training ground for executives to learn the breadth and depth of a firm’s operations, to understand the ways in which firms are vulnerable to attack, and to see where they can leverage opportunities for growth (Fort, 2009). • Your Future With tech at the center of so much change, realize that you may very well be preparing for careers that don’t yet exist. But by studying the intersection of business and technology today, you develop a base to build upon and critical thinking skills that will help you evaluate new, emerging technologies. Think you can afford to wait on tech study, then quickly get up to speed? Think about it. Whom do you expect to have an easier time adapting to and leveraging a technology like social media—today’s college students who are immersed in technology or their parents who are embarrassingly dipping their toes into the waters of Facebook? Those who put off an understanding of technology risk being left in the dust. Consider the nontechnologists who have tried to enter the technology space these past few years. News Corp. head Rupert Murdoch piloted his firm to the purchase of MySpace only to see this one-time leader lose share to rivals (Malik, 2010). Former Warner executive Terry Semel presided over Yahoo!’s malaise as Google blasted past it (Thaw, 2007). Barry Diller, the man widely credited with creating the Fox Network, led InterActive Corp (IAC) in the acquisition of a slew of tech firms ranging from Expedia to Ask.com, only to break the empire up as it foundered. And Time Warner head Jerry Levin presided over the acquisition of AOL, executing what many consider to be one of the most disastrous mergers in U.S. business history (Quinn, 2009). Contrast these executives with the technology-centric successes of Mark Zuckerberg (Facebook), Steve Jobs (Apple), and Sergey Brin and Larry Page (Google). While we’ll make it abundantly clear that a focus solely on technology is a recipe for disaster, a business perspective that lacks an appreciation for tech’s role is also likely to be doomed. At this point in history, technology and business are inextricably linked, and those not trained to evaluate and make decisions in this ever-shifting space risk irrelevance, marginalization, and failure. Key Takeaways • As technology becomes cheaper and more powerful, it pervades more industries and is becoming increasingly baked into what were once nontech functional areas. • Technology is impacting every major business discipline, including finance, accounting, marketing, operations, human resources, and the law. • Tech jobs rank among the best and highest-growth positions, and tech firms rank among the best and highest-paying firms to work for. 
• Information systems (IS) jobs are profoundly diverse, ranging from those that require heavy programming skills to those that are focused on design, process, project management, privacy, and strategy. Questions and Exercises 1. Look at Fortune’s “Best Companies to Work For” list. How many of these firms are technology firms? Which firm would you like to work for? Is it represented on this list? 2. Look at BusinessWeek’s “Best Places to Start Your Career” list. Is the firm you mentioned above also on this list? 3. What are you considering studying? What are your short-term and long-term job goals? What role will technology play in that career path? What should you be doing to ensure that you have the skills needed to compete? 4. Which jobs that exist today likely won’t exist at the start of the next decade? Based on your best guess on how technology will develop, can you think of jobs and skill sets that will likely emerge as critical five and ten years from now?
textbooks/biz/Management/Book%3A_Information_Systems__A_Managers_Guide_to_Harnessing_Technology/01%3A_Setting_the_Stage-_Technology_and_the_Modern_Enterprise/1.03%3A_Geek_UpTech_Is_Everywhere_and_Youll_Need_It_to_Thrive.txt
Learning Objective After studying this section you should be able to do the following: 1. Understand the structure of this text, the issues and examples that will be introduced, and why they are important. Hopefully this first chapter has helped get you excited for what’s to come. The text is written in a style meant to be as engaging as the material you’ll be reading for the rest of your management career—articles in business magazines and newspapers. The introduction of concepts in this text is also example-rich, and every concept or technology discussed is grounded in a real-world example to show why it’s important. But also know that while we celebrate successes and expose failures in that space where business and technology come together, we also recognize that firms and circumstances change. Today’s winners have no guarantee of sustained dominance. What you should acquire in the pages that follow is a fourfold set of benefits that (1) provide a description of what’s happening in industry today, (2) offer an introduction to key business and technology concepts, (3) offer a durable set of concepts and frameworks that can be applied even as technologies and industries change, and (4) develop critical thinking skills that will serve you well throughout your career as a manager. Chapters don’t have to be read in order, so feel free to bounce around if you’d like. But here’s what you can expect: Chapter 2 “Strategy and Technology: Concepts and Frameworks for Understanding What Separates Winners from Losers” focuses on building big-picture skills to think about how to leverage technology for competitive advantage. Technology alone is rarely the answer, but through a rich set of examples, we’ll show how firms can weave technology into their operations in ways that create and reinforce resources that can garner profits while repelling competitors. A mini case examines tech’s role at FreshDirect, a firm that has defied the many failures in the online grocery space and devastated traditional rivals. BlueNile, Dell, Lands’ End, TiVo, and Yahoo! are among the many firms providing a rich set of examples illustrating successes and failures in leveraging technology. The chapter will show how firms use technology to create and leverage brand, scale economies, switching costs, data assets, network effects, and distribution channels. We’ll introduce how technology relates to two popular management frameworks—the value chain and the five forces model. And we’ll provide a solid decision framework for considering the controversial and often misunderstood role that technology plays among firms that seek an early-mover advantage. In Chapter 3 “Zara: Fast Fashion from Savvy Systems”, we see how a tech-fed value chain helped Spanish clothing giant Zara craft a counterintuitive model that seems to defy all conventional wisdom in the fashion industry. We’ll show how Zara’s model differs radically from that of the firm it displaced to become the world’s top clothing retailer: Gap. We’ll see how technology impacts product design, product development, marketing, cycle time, inventory management, and customer loyalty, and how technology decisions influence broad profitability that goes way beyond the cost-of-goods thinking common among many retailers. We’ll also offer a mini case on Fair Factories Clearinghouse, an effort highlighting the positive role of technology in improving ethical business practices. 
Another mini case shows the difference between thinking about technology and thinking broadly about systems, all through an examination of how high-end fashion house Prada failed to roll out technology that on the surface seemed very similar to Zara’s. Chapter 4 “Netflix: The Making of an E-commerce Giant and the Uncertain Future of Atoms to Bits” tramples the notion that dot-com start-up firms can’t compete against large, established rivals. We’ll show how information systems at Netflix created a set of assets that grew in strength and remain difficult for rivals to match. The economics of pure-play versus brick-and-mortar firms is examined, and we’ll introduce managerial thinking on various concepts such as the data asset, personalization systems (recommendation engines and collaborative filtering), the long tail and the implications of technology on selection and inventory, crowdsourcing, using technology for novel revenue models (subscription and revenue-sharing with suppliers), forecasting, and inventory management. The case ends with a discussion of Netflix’s uncertain future, where we present how the shift from atoms (physical discs) to bits (streaming and downloads) creates additional challenges. Issues of licensing and partnerships, revenue models, and delivery platforms are all discussed. Chapter 5 “Moore’s Law: Fast, Cheap Computing and What It Means for the Manager” focuses on understanding the implications of technology change for firms and society. The chapter offers accessible definitions for technologies impacted by Moore’s Law, but goes beyond semiconductors and silicon to show how the rate of improvement in magnetic storage (e.g., hard drives) and networking creates markets filled with uncertainty and opportunity. The chapter will show how tech has enabled the rise of Apple and Amazon, created mobile phone markets that empower the poor worldwide, and driven five waves of disruptive innovation over five decades. We’ll also show how Moore’s Law, perhaps the greatest economic gravy train in history, will inevitably run out of steam as the three demons of heat, power, and limits on shrinking transistors halt the advancement of current technology. Studying technologies that “extend” Moore’s Law, such as multicore semiconductors, helps illustrate both the benefits and limitations of technology options, and in doing so, helps develop skills around recognizing the pros and cons of a given innovation. Supercomputing, grid, and cloud computing are introduced through examples that show how these advances are changing the economics of computing and creating new opportunities. Finally, issues of e-waste are explored in a way that shows that firms not only need to consider the ethics of product sourcing, but also the ethics of disposal. In Chapter 6 “Understanding Network Effects”, we’ll see how technologies, services, and platforms can create nearly insurmountable advantages. Tech firms from Facebook to Intel to Microsoft are dominant because of network effects—the idea that some products and services get more valuable as more people use them. Studying network effects creates better decision makers. The concept is at the heart of technology standards and platform competition, and understanding network effects can help managers choose technologies that are likely to win, hopefully avoiding getting caught with a failed, poorly supported system. Students learn how network effects work and why they’re difficult to unseat. 
The chapter ends with an example-rich discussion of various techniques that one can use to compete in markets where network effects are present. Chapter 7 “Peer Production, Social Media, and Web 2.0” explores business issues behind several services that have grown to become some of the Internet’s most popular destinations. Peer production and social media are enabling new services and empowering the voice of the customer as never before. In this chapter, students learn about various technologies used in social media and peer production, including blogs, wikis, social networking, Twitter, and more. Prediction markets and crowdsourcing are introduced, along with examples of how firms are leveraging these concepts for insight and innovation. Finally, students are offered guidance on how firms can think SMART by creating a social media awareness and response team. Issues of training, policy, and response are introduced, and technologies for monitoring and managing online reputations are discussed. Chapter 8 “Facebook: Building a Business from the Social Graph” will allow us to study success and failure in IS design and deployment by examining one of the Web’s hottest firms. Facebook is one of the most accessible and relevant Internet firms to so many, but it’s also a wonderful laboratory to discuss critical managerial concepts. The founding story of Facebook introduces concepts of venture capital, the board of directors, and the role of network effects in entrepreneurial control. Feeds show how information, content, and applications can spread virally, but also introduce privacy concerns. Facebook’s strength in switching costs demonstrates how it has been able to envelop additional markets from photos to chat to video and more. The failure of the Beacon system shows how even bright technologists can fail if they ignore the broader procedural and user implications of an information systems rollout. Social networking advertising is contrasted with search, and the perils of advertising alongside social media content are introduced. Issues of predators and privacy are covered. And the case allows for a broader discussion on firm value and what Facebook might really be worth. Chapter 9 “Understanding Software: A Primer for Managers” offers a primer to help managers better understand what software is all about. The chapter provides a brief introduction to software technologies. Students learn about operating systems, application software, and how these relate to each other. Enterprise applications are introduced, and the alphabet soup of these systems (e.g., ERP, CRM, and SCM) is accessibly explained. Various forms of distributed systems (client-server, Web services, messaging) are also covered. The chapter provides a managerial overview of how software is developed, offers insight into the importance of Java and scripting languages, and explains the differences between compiled and interpreted systems. System failures, total cost of ownership, and project risk mitigation are also introduced. The array of concepts covered helps a manager understand the bigger picture and should provide an underlying appreciation for how systems work that will serve even as technologies change and new technologies are introduced. The software industry is changing radically, and that’s the focus of Chapter 10 “Software in Flux: Partly Cloudy and Sometimes Free”. The issues covered in this chapter are front and center for any firm making technology decisions. 
We’ll cover open source software, software as a service, hardware clouds, and virtualization. Each topic is introduced by discussing advantages, risks, business models, and examples of effective use. The chapter ends by introducing issues that a manager must consider when making decisions as to whether to purchase technology, contract or outsource an effort, or develop an effort in-house. In Chapter 11 “The Data Asset: Databases, Business Intelligence, and Competitive Advantage”, we’ll study data, which is often an organization’s most critical asset. Data lies at the heart of every major discipline, including marketing, accounting, finance, operations, forecasting, and planning. We’ll help managers understand how data is created, organized, and effectively used. We’ll cover limitations in data sourcing, issues in privacy and regulation, and tools for access, including various business intelligence technologies. A mini case on Wal-Mart shows data’s use in empowering a firm’s entire value chain, while the mini case on Harrah’s shows how data-driven customer relationship management is at the center of creating an industry giant. Chapter 12 “A Manager’s Guide to the Internet and Telecommunications” unmasks the mystery of the Internet—it shows how the Internet works and why a manager should care about IP addresses, IP networking, the DNS, peering, and packet versus circuit switching. We’ll also cover last-mile technologies and the various strengths and weaknesses of getting a faster Internet to a larger population. The revolution in mobile technologies and the impact on business will also be presented. Chapter 13 “Information Security: Barbarians at the Gateway (and Just About Everywhere Else)” helps managers understand attacks and vulnerabilities and how to keep end users and organizations more secure. Breaches at TJX and Heartland and the increasing vulnerability of end-user systems have highlighted how information security is now the concern of the entire organization, from senior executives to front-line staff. This chapter explains what’s happening with respect to information security—what kinds of attacks are occurring, who is doing them, and what their motivation is. We’ll uncover the source of vulnerabilities in systems: human, procedural, and technical. Hacking concepts such as botnets, malware, phishing, and SQL injection are explained using plain, accessible language. Also presented are techniques to improve information security both as an end user and within an organization. The combination of current issues and their relation to a broader framework for security should help you think about vulnerabilities even as technologies and exploits change over time. Chapter 14 “Google: Search, Online Advertising, and Beyond” discusses one of the most influential and far-reaching firms in today’s business environment. As pointed out earlier, a decade ago Google barely existed, but it now earns more ad revenue and is more profitable than any other media company, online or off. Google is a major force in modern marketing, research, and entertainment. In this chapter you’ll learn how Google (and Web search in general) works. Issues of search engine ranking, optimization, and search infrastructure are introduced. 
Students gain an understanding of search advertising and other advertising techniques, ad revenue models such as CPM and CPC, online advertising networks, various methods of customer profiling (e.g., IP addresses, geotargeting, cookies), click fraud, fraud prevention, and issues related to privacy and regulation. The chapter concludes with a broad discussion of how Google is evolving (e.g., Android, Chrome, Apps, YouTube) and how this evolution is bringing it into conflict with several well-funded rivals, including Amazon, Apple, Microsoft, and more. Nearly every industry and every functional area is increasing its investment in and reliance on information technology. With opportunity come trade-offs: research has shown that a high level of IT investment is associated with a more frenzied competitive environment (Brynjolfsson et al., 2008). But while the future is uncertain, we don’t have the luxury to put on the brakes or dial back the clock—tech’s impact is here to stay. Those firms that emerge as winners will treat IT efforts “as opportunities to define and deploy new ways of working, rather than just projects to install, configure, or integrate systems” (McAfee & Brynjolfsson, 2007). The examples, concepts, and frameworks in the pages that follow will help you build the tools and decision-making prowess needed for victory. Key Takeaways • This text contains a series of chapters and cases that expose durable concepts, technologies, and frameworks, and does so using cutting-edge examples of what’s happening in industry today. • While firms and technologies will change, and success at any given point in time is no guarantee of future victory, the issues illustrated and concepts acquired should help shape a manager’s decision making in a way that will endure. Questions and Exercises 1. Which firms do you most admire today? How do these firms use technology? Do you think technology gives them an advantage over rivals? Why or why not? 2. What areas covered in this book are most exciting? Most intimidating? Which do you think will be most useful?
textbooks/biz/Management/Book%3A_Information_Systems__A_Managers_Guide_to_Harnessing_Technology/01%3A_Setting_the_Stage-_Technology_and_the_Modern_Enterprise/1.04%3A_The_Pages_Ahead.txt
Learning Objectives After studying this section you should be able to do the following: 1. Define operational effectiveness and understand the limitations of technology-based competition leveraging this principle. 2. Define strategic positioning and the importance of grounding competitive advantage in this concept. 3. Understand the resource-based view of competitive advantage. 4. List the four characteristics of a resource that might yield sustainable competitive advantage. Managers are confused, and for good reason. Management theorists, consultants, and practitioners often vehemently disagree on how firms should craft tech-enabled strategy, and many widely read articles contradict one another. Headlines such as “Move First or Die” compete with “The First-Mover Disadvantage.” A leading former CEO advises, “destroy your business,” while others suggest firms focus on their “core competency” and “return to basics.” The pages of the Harvard Business Review declare, “IT Doesn’t Matter,” while a New York Times bestseller hails technology as the “steroids” of modern business. Theorists claiming to have mastered the secrets of strategic management are contentious and confusing. But for a manager, the ability to size up a firm’s strategic position and understand its likelihood of sustainability is one of the most valuable and yet most difficult skills to master. Layer on thinking about technology—a key enabler to nearly every modern business strategy, but also a function often thought of as easily “outsourced”—and it’s no wonder that so many firms struggle at the intersection where strategy and technology meet. The business landscape is littered with the corpses of firms killed by managers who guessed wrong. Developing strong strategic thinking skills is a career-long pursuit—a subject that can occupy tomes of text, a roster of courses, and a lifetime of seminars. While this chapter can’t address the breadth of strategic thought, it is meant as a primer on developing the skills for strategic thinking about technology. A manager who understands the issues presented in this chapter should be able to see through seemingly conflicting assertions about best practices more clearly; be better prepared to recognize opportunities and risks; and be more adept at successfully brainstorming new, tech-centric approaches to markets. 2.02: Powerful Resources Learning Objectives After studying this section you should be able to do the following: 1. Understand that technology is often critical to enabling competitive advantage, and provide examples of firms that have used technology to organize for sustained competitive advantage. 2. Understand the value chain concept and be able to examine and compare how various firms organize to bring products and services to market. 3. Recognize the role technology can play in crafting an imitation-resistant value chain, as well as when technology choice may render potentially strategic assets less effective. 4. Define the following concepts: brand, scale, data and switching cost assets, differentiation, network effects, and distribution channels. 5. Understand and provide examples of how technology can be used to create or strengthen the resources mentioned above. Management has no magic bullets. There is no exhaustive list of key resources that firms can look to in order to build a sustainable business. And recognizing a resource doesn’t mean a firm will be able to acquire it or exploit it forever. 
But being aware of major sources of competitive advantage can help managers recognize an organization’s opportunities and vulnerabilities, and can help them brainstorm winning strategies. And these assets rarely exist in isolation. Oftentimes, a firm with an effective strategic position can create an arsenal of assets that reinforce one another, creating advantages that are particularly difficult for rivals to successfully challenge. • Imitation-Resistant Value Chains While many of the resources below are considered in isolation, the strength of any advantage can be far more significant if firms are able to leverage several of these resources in a way that makes each stronger and makes the firm’s way of doing business more difficult for rivals to match. Firms that craft an imitation-resistant value chain have developed a way of doing business that others will struggle to replicate, and in nearly every successful effort of this kind, technology plays a key enabling role. The value chain is the set of interrelated activities that bring products or services to market (see below). When we compare FreshDirect’s value chain to those of traditional rivals, there are differences across every element. But most importantly, the elements in FreshDirect’s value chain work together to create and reinforce competitive advantages that others cannot easily copy. Incumbents would be straddled between two business models, unable to reap the full advantages of either. And late-moving pure-play rivals will struggle, as FreshDirect’s lead time allows the firm to develop brand, scale, data, and other advantages that newcomers lack (see below for more on these resources). Key Framework: The Value Chain The value chain is the “set of activities through which a product or service is created and delivered to customers.” There are five primary components of the value chain and four supporting components. The primary components are as follows: • Inbound logistics—getting needed materials and other inputs into the firm from suppliers • Operations—turning inputs into products or services • Outbound logistics—delivering products or services to consumers, distribution centers, retailers, or other partners • Marketing and sales—customer engagement, pricing, promotion, and transaction • Support—service, maintenance, and customer support The supporting components are the following: • Firm infrastructure—functions that support the whole firm, including general management, planning, IS, and finance • Human resource management—recruiting, hiring, training, and development • Technology / research and development—new product and process design • Procurement—sourcing and purchasing functions While the value chain is typically depicted as it’s displayed in the figure below, goods and information don’t necessarily flow in a line from one function to another. For example, an order taken by the marketing function can trigger an inbound logistics function to get components from a supplier, operations functions (to build a product if it’s not available), or outbound logistics functions (to ship a product when it’s available). Similarly, information from service support can be fed back to advise research and development (R&D) in the design of future products (a simple sketch of these activities and their flows appears just after this framework). Figure 2.2 The Value Chain When a firm has an imitation-resistant value chain—one that’s tough for rivals to copy while gaining similar benefits—the firm may have a critical competitive asset. 
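To make the framework a bit more concrete, here is a minimal sketch, in Python, of how the primary and support activities and their nonlinear flows might be represented. It is purely illustrative: the activity names follow the framework above, but the product, the trigger logic, and the function names are hypothetical assumptions for teaching purposes, not details drawn from FreshDirect or any other firm discussed in this book.

# Hypothetical sketch of value chain activities and the nonlinear flows
# described above. Product names and trigger logic are illustrative only.

PRIMARY_ACTIVITIES = ["inbound logistics", "operations", "outbound logistics",
                      "marketing and sales", "support"]
SUPPORT_ACTIVITIES = ["firm infrastructure", "human resource management",
                      "technology / R&D", "procurement"]

def handle_order(item, in_stock, components_on_hand):
    """An order taken by marketing and sales may trigger outbound logistics
    (ship it), operations (build it), or inbound logistics (source parts)."""
    steps = ["marketing and sales: order taken for " + item]
    if in_stock:
        steps.append("outbound logistics: ship " + item)
    else:
        if not components_on_hand:
            steps.append("inbound logistics: request components from a supplier")
        steps.append("operations: build " + item)
        steps.append("outbound logistics: ship " + item)
    # Feedback loop: service insights flow back toward product design.
    steps.append("support: log customer feedback for technology / R&D")
    return steps

for step in handle_order("grocery delivery box", in_stock=False, components_on_hand=True):
    print(step)

The point of the sketch is simply that the activities interact as a network rather than a straight line; an imitation-resistant value chain comes from how a particular firm configures and reinforces these links, not from the list of activities itself.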
From a strategic perspective, managers can use the value chain framework to consider a firm’s differences and distinctiveness compared to rivals. If a firm’s value chain can’t be copied by competitors without engaging in painful trade-offs, or if the firm’s value chain helps to create and strengthen other strategic assets over time, it can be a key source of competitive advantage. Many of the cases covered in this book, including FreshDirect, Amazon, Zara, Netflix, and eBay, illustrate this point. An analysis of a firm’s value chain can also reveal operational weaknesses, and technology is often of great benefit to improving the speed and quality of execution. Firms can often buy software to improve things, and tools such as supply chain management (SCM; linking inbound and outbound logistics with operations), customer relationship management (CRM; supporting sales, marketing, and in some cases R&D), and enterprise resource planning software (ERP; software implemented in modules to automate the entire value chain) can have a big impact on more efficiently integrating the activities within the firm, as well as with its suppliers and customers. But remember, these software tools can be purchased by competitors, too. While valuable, such software may not yield lasting competitive advantage if rivals can easily match it. There’s potential danger here. If a firm adopts software that changes a unique process into a generic one, it may have co-opted a key source of competitive advantage, particularly if other firms can buy the same stuff. This isn’t a problem with something like accounting software. Accounting processes are standardized and accounting isn’t a source of competitive advantage, so most firms buy rather than build their own accounting software. But using packaged, third-party SCM, CRM, and ERP software typically requires adopting a very specific way of doing things, using software and methods that can be purchased and adopted by others. During its period of PC-industry dominance, Dell stopped deployment of the logistics and manufacturing modules of a packaged ERP implementation when it realized that the software would require the firm to make changes to its unique and highly successful operating model, and that many of the firm’s unique supply chain advantages would change to the point where the firm was doing the same thing using the same software as its competitors. By contrast, Apple had no problem adopting third-party ERP software because the firm competes on product uniqueness rather than operational differences. Dell’s Struggles: Nothing Lasts Forever Michael Dell enjoyed an extended run that took him from assembling PCs in his dorm room as an undergraduate at the University of Texas at Austin to heading the largest PC firm on the planet. For years Dell’s superefficient, vertically integrated manufacturing and direct-to-consumer model combined to help the firm earn seven times more profit on its own systems when compared with comparably configured rival PCs. And since Dell PCs were usually cheaper, too, the firm could often start a price war and still have better overall margins than rivals. It was a brilliant model that for years proved resistant to imitation. While Dell sold direct to consumers, rivals had to share a cut of sales with the less efficient retail chains responsible for the majority of their sales. 
Dell’s rivals struggled in moving toward direct sales because any retailer sensing its suppliers were competing with it through a direct-sales effort could easily choose another supplier that sold a nearly identical product. It wasn’t that HP, IBM, Sony, and so many others didn’t see the advantage of Dell’s model—these firms were wedded to models that made it difficult for them to imitate their rival. But then Dell’s killer model, one that had become a staple case study in business schools, began to lose steam. Nearly two decades of observing Dell had allowed the contract manufacturers serving Dell’s rivals to improve manufacturing efficiency. Component suppliers located near contract manufacturers, and assembly times fell dramatically. And as the cost of computing fell, the price advantage Dell enjoyed over rivals also shrank in absolute terms. That meant savings from buying a Dell weren’t as big as they once were. On top of that, the direct-to-consumer model also suffered when sales of notebook PCs outpaced the more commoditized desktop market. Notebooks are more differentiated than desktops, and customers often want to compare products in person—lift them, type on keyboards, and view screens—before making a purchase decision. In time, these shifts created an opportunity for rivals to knock Dell from its ranking as the world’s number one PC manufacturer. Dell has even abandoned its direct-only business model and now sells products through third-party brick-and-mortar retailers. Dell’s struggles as computers, customers, and the product mix changed all underscore the importance of continually assessing a firm’s strategic position amid changing market conditions. There is no guarantee that today’s winning strategy will dominate forever.
textbooks/biz/Management/Book%3A_Information_Systems__A_Managers_Guide_to_Harnessing_Technology/02%3A_Strategy_and_Technology-_Concepts_and_Frameworks_for_Understanding_What_Separates_Winners_from_Losers/2.01%3A_Introduction.txt
Learning Objectives After studying this section you should be able to do the following: 1. Understand the relationship between timing, technology, and the creation of resources for competitive advantage. 2. Argue effectively when faced with broad generalizations about the importance (or lack of importance) of technology and timing to competitive advantage. 3. Recognize the difference between low barriers to entry and the prospects for the sustainability of new entrants’ efforts. Some have correctly argued that the barriers to entry for many tech-centric businesses are low. This argument is particularly true for the Internet, where rivals can put up a competing Web site seemingly overnight. But it’s absolutely critical to understand that market entry is not the same as building a sustainable business, and just showing up doesn’t guarantee survival. Platitudes like “follow, don’t lead” can put firms dangerously at risk, and statements about low entry barriers ignore the difficulty many firms will have in matching the competitive advantages of successful tech pioneers (Carr, 2003). Should Blockbuster have waited while Netflix pioneered? In a year when Netflix profits were up sevenfold, Blockbuster lost more than $1 billion (Economist, 2003). Should Sotheby’s have dismissed seemingly inferior eBay? Sotheby’s lost over $6 million in 2009; eBay earned nearly $2.4 billion in profits. Barnes & Noble waited seventeen months to respond to Amazon.com. Amazon now has twelve times the profits of its offline rival, and its market cap is over forty-eight times greater.1 Today’s Internet giants are winners because, in most cases, they were the first to move with a profitable model and they were able to quickly establish resources for competitive advantage. With few exceptions, established offline firms have failed to catch up to today’s Internet leaders. Timing and technology alone will not yield sustainable competitive advantage. Yet both of these can be enablers for competitive advantage. Put simply, it’s not the time lead or the technology; it’s what a firm does with its time lead and technology. True strategic positioning means that a firm has created differences that cannot be easily matched by rivals. Moving first pays off when the time lead is used to create critical resources that are valuable, rare, tough to imitate, and lack substitutes. Anything less risks the arms race of operational effectiveness. Build resources like brand, scale, network effects, switching costs, or other key assets and your firm may have a shot. But guess wrong about the market or screw up execution and failure or direct competition awaits. It is true that most tech can be copied—there’s little magic in eBay’s servers, Intel’s processors, Oracle’s databases, or Microsoft’s operating systems that past rivals have not at one point improved upon. But each of these tech-enabled firms leveraged its lead to create network effects, switching costs, and data assets, and to build solid, well-respected brands. But Google Arrived Late! Why Incumbents Must Constantly Consider Rivals Yahoo! was able to maintain its lead in e-mail because the firm quickly matched and nullified Gmail’s most significant tech-based innovations before Google could inflict real damage. Perhaps Yahoo! had learned from prior errors. The firm’s earlier failure to respond to Google’s emergence as a credible threat in search advertising gave Sergey Brin and Larry Page the time they needed to build the planet’s most profitable Internet firm. Yahoo! 
(and many Wall Street analysts) saw search as a commodity—a service the firm had subcontracted out to other firms, including AltaVista and Inktomi. Yahoo! saw no conflict in taking an early investment stake in Google or in using the firm for its search results. But Yahoo! failed to pay attention to Google’s advance. Because Google’s innovations in technology and interface remained unmatched over time, the firm was able to build its brand, scale, and advertising network (distribution channel), which grew from network effects whereby content providers and advertisers attract one another. These are all competitive resources that rivals have never been able to match. Google’s ability to succeed after being late to the search party isn’t a sign of the power of the late mover; it’s a story about the failure of incumbents to monitor their competitive landscape, recognize new rivals, and react to challenging offerings. That doesn’t mean that incumbents need to respond to every potential threat. Indeed, figuring out which threats are worthy of response is the real skill here. Video rental chain Hollywood Video wasted over $300 million in an Internet streaming business years before high-speed broadband was available to make the effort work (N. Wingfield, “Netflix vs. the Naysayers,” Wall Street Journal, March 21, 2007). But while Blockbuster avoided the balance sheet–cratering gaffes of Hollywood Video, the firm also failed to respond to Netflix—a new threat that had timed market entry perfectly (see Chapter 4 “Netflix: The Making of an E-commerce Giant and the Uncertain Future of Atoms to Bits”). Firms that quickly get to market with the “right” model can dominate, but it’s equally critical for leading firms to pay close attention to competition and innovate in ways that customers value. Take your eye off the ball and rivals may use time and technology to create strategic resources. Just look at Friendster—a firm that was once known as the largest social network in the United States but has fallen so far behind rivals that it has become virtually irrelevant today. Key Takeaways • It doesn’t matter if it’s easy for new firms to enter a market if these newcomers can’t create and leverage the assets needed to challenge incumbents. • Beware of those who say, “IT doesn’t matter” or refer to the “myth” of the first mover. This thinking is overly simplistic. It’s not a time or technology lead that provides sustainable competitive advantage; it’s what a firm does with its time and technology lead. If a firm can use a time and technology lead to create valuable assets that others cannot match, it may be able to sustain its advantage. But if the work done in this time and technology lead can be easily matched, then no advantage can be achieved, and a firm may be threatened by new entrants. Questions and Exercises 1. Does technology lower barriers to entry or raise them? Do low entry barriers necessarily mean that a firm is threatened? 2. Is there such a thing as the first-mover advantage? Why or why not? 3. Why did Google beat Yahoo! in search? 4. A former editor of the Harvard Business Review, Nick Carr, once published an article in that same magazine with the title “IT Doesn’t Matter.” In the article he also offered firms the advice: “Follow, Don’t Lead.” What would you tell Carr to help him improve the way he thinks about the relationship between time, technology, and competitive advantage? 5. Name an early mover that has successfully defended its position. 
Name another that has been superseded by the competition. What factors contributed to its success or failure? 6. You have just written a word processing package far superior in features to Microsoft Word. You now wish to form a company to market it. List and discuss the barriers your start-up faces. 1FY 2008 net income and June 2009 market cap figures for both firms: http://www.barnesandnobleinc.com/new...cial_only.html and phx.corporate-ir.net/phoenix....l-reportsOther.
textbooks/biz/Management/Book%3A_Information_Systems__A_Managers_Guide_to_Harnessing_Technology/02%3A_Strategy_and_Technology-_Concepts_and_Frameworks_for_Understanding_What_Separates_Winners_from_Losers/2.03%3A_Barriers_to_Entry_Technol.txt
Learning Objectives After studying this section you should be able to do the following: 1. Diagram the five forces of competitive advantage. 2. Apply the framework to an industry, assessing the competitive landscape and the role of technology in influencing the relative power of buyers, suppliers, competitors, and alternatives. Professor and strategy consultant Gary Hamel once wrote in a Fortune cover story that “the dirty little secret of the strategy industry is that it doesn’t have any theory of strategy creation” (Hamel, 1997). While there is no silver bullet for strategy creation, strategic frameworks help managers describe the competitive environment a firm is facing. Frameworks can also be used as brainstorming tools to generate new ideas for responding to industry competition. If you have a model for thinking about competition, it’s easier to understand what’s happening and to think creatively about possible solutions. One of the most popular frameworks for examining a firm’s competitive environment is Porter’s five forces, also known as the Industry and Competitive Analysis. As Porter puts it, “analyzing [these] forces illuminates an industry’s fundamental attractiveness, exposes the underlying drivers of average industry profitability, and provides insight into how profitability will evolve in the future.” The five forces this framework considers are (1) the intensity of rivalry among existing competitors, (2) the threat of new entrants, (3) the threat of substitute goods or services, (4) the bargaining power of buyers, and (5) the bargaining power of suppliers (see Figure 2.6 “The Five Forces of Industry and Competitive Analysis”). Figure 2.6 The Five Forces of Industry and Competitive Analysis New technologies can create jarring shocks in an industry. Consider how the rise of the Internet has impacted the five forces for music retailers. Traditional music retailers like Tower and Virgin found that customers were seeking music online. These firms scrambled to invest in the new channel out of what was perceived to be a necessity. The intensity of rivalry increases because these firms no longer compete only on the geography of where their brick-and-mortar stores are located; they now compete online as well. Investments online are expensive and uncertain, prompting some firms to partner with new entrants such as Amazon. Free from brick-and-mortar stores, Amazon, the dominant new entrant, has a highly scalable cost structure. And in many ways the online buying experience is superior to what customers saw in stores. Customers can hear samples of almost all tracks, selection is seemingly limitless (the long tail phenomenon—see this concept illuminated in Chapter 4 “Netflix: The Making of an E-commerce Giant and the Uncertain Future of Atoms to Bits”), and data is leveraged using collaborative filtering software to make product recommendations and assist in music discovery.1 Tough competition, but it gets worse because CD sales aren’t the only way to consume music. The process of buying a plastic disc now faces substitutes as digital music files become available on commercial music sites. Who needs the physical atoms of a CD filled with ones and zeros when you can buy the bits one song at a time? Or don’t buy anything and subscribe to a limitless library instead. From a sound quality perspective, the substitute good of digital tracks purchased online is almost always inferior to its CD counterpart. 
To transfer songs quickly and hold more songs on a digital music player, tracks are encoded in a smaller file size than what you’d get on a CD, and this smaller file offers lower playback fidelity. But the additional tech-based market shock brought on by digital music players (particularly the iPod) has changed listening habits. The convenience of carrying thousands of songs trumps what most consider just a slight quality degradation. iTunes is now responsible for selling more music than any other firm, online or off. Most alarming to the industry is the other widely adopted substitute for CD purchases—theft. Illegal music “sharing” services abound, even after years of record industry crackdowns. And while exact figures on real losses from online piracy are in dispute, the music industry has seen album sales drop by 45 percent in less than a decade (Barnes, 2009). All this choice gives consumers (buyers) bargaining power. They demand lower prices and greater convenience. The bargaining power of suppliers—the music labels and artists—also increases. At the start of the Internet revolution, retailers could pressure labels to limit sales through competing channels. Now, with many of the major music retail chains in bankruptcy, labels have a freer hand to experiment, while bands large and small have new ways to reach fans, sometimes in ways that entirely bypass the traditional music labels. While it can be useful to look at changes in one industry as a model for potential change in another, it’s important to realize that the changes that impact one industry do not necessarily impact other industries in the same way. For example, it is often suggested that the Internet increases the bargaining power of buyers and lowers the bargaining power of suppliers. This suggestion is true for some industries like auto sales and jewelry, where the products are commodities and the price transparency of the Internet counteracts a previous information asymmetry in which customers often didn’t know enough about a product to bargain effectively. But it’s not true across the board. In cases where network effects are strong or a seller’s goods are highly differentiated, the Internet can strengthen supplier bargaining power. The customer base of an antique dealer used to be limited by how many likely purchasers lived within driving distance of a store. Now with eBay, the dealer can take a rare good to a global audience and have a much larger customer base bid up the price. Switching costs also weaken buyer bargaining power. Wells Fargo has found that customers who use online bill pay (where switching costs are high) are 70 percent less likely to leave the bank than those who don’t, suggesting that these switching costs help cement customers to the company even when rivals offer more compelling rates or services. Tech plays a significant role in shaping and reshaping these five forces, but it’s not the only significant force that can create an industry shock. Government deregulation or intervention, political shock, and social and demographic changes can all play a role in altering the competitive landscape. Because we live in an age of constant and relentless change, managers need to continually revisit strategic frameworks to consider any market-impacting shifts. Predicting the future is difficult, but ignoring change can be catastrophic. 
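As a study aid, here is a minimal sketch, in Python, of how the music-retail walkthrough above might be captured as a simple five forces checklist. It is purely illustrative: the force names follow Porter’s framework, but the short directional notes merely paraphrase the discussion above, and the data structure is a hypothetical study device rather than part of the framework itself.

# Hypothetical sketch: recording a five forces assessment as plain data.
# The notes paraphrase the music retail discussion above and are
# illustrative only, not authoritative ratings.

music_retail_assessment = {
    "rivalry among existing competitors":
        "intensified: stores now compete online as well as on location",
    "threat of new entrants":
        "increased: scalable pure-play rivals such as Amazon entered",
    "threat of substitutes":
        "increased: digital downloads, subscriptions, and piracy displace CDs",
    "bargaining power of buyers":
        "increased: more choice, convenience, and price pressure",
    "bargaining power of suppliers":
        "increased: labels and artists can reach fans through new channels",
}

for force, note in music_retail_assessment.items():
    print(force + ": " + note)

A checklist like this is only a starting point. As the text notes, the same technology can shift a given force in opposite directions in different industries, so each assessment has to be rebuilt industry by industry rather than copied from one setting to another.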
Key Takeaways • Industry competition and attractiveness can be described by considering the following five forces: (1) the intensity of rivalry among existing competitors, (2) the potential for new entrants to challenge incumbents, (3) the threat posed by substitute products or services, (4) the power of buyers, and (5) the power of suppliers. • In markets where commodity products are sold, the Internet can increase buyer power by increasing price transparency. • The more differentiated and valuable an offering, the more the Internet shifts bargaining power to sellers. Highly differentiated sellers that can advertise their products to a wider customer base can demand higher prices. • A strategist must constantly refer to models that describe events impacting their industry, particularly as new technologies emerge. Questions and Exercises 1. What are Porter’s “five forces”? 2. Use the five forces model to illustrate competition in the newspaper industry. Are some competitors better positioned to withstand this environment than others? Why or why not? What role do technology and resources for competitive advantage play in shaping industry competition? 3. What is price transparency? What is information asymmetry? How does the Internet relate to these two concepts? How does the Internet shift bargaining power among the five forces? 4. How has the rise of the Internet impacted each of the five forces for music retailers? 5. In what ways is the online music buying experience superior to that of buying in stores? 6. What is the substitute for music CDs? What is the comparative sound quality of the substitute? Why would a listener accept an inferior product? 7. Based on Porter’s five forces, is this a good time to enter the retail music industry? Why or why not? 8. What is the cost to the music industry of music theft? Cite your source. 9. Discuss the concepts of price transparency and information asymmetry as they apply to the diamond industry as a result of the entry of BlueNile. Name another industry where the Internet has had a similar impact. 10. Under what conditions can the Internet strengthen supplier bargaining power? Give an example. 11. What is the effect of switching costs on buyer bargaining power? Give an example. 12. How does the Internet impact bargaining power for providers of rare or highly differentiated goods? Why? 1For more on the long tail and collaborative filtering, see Chapter 4 “Netflix: The Making of an E-commerce Giant and the Uncertain Future of Atoms to Bits”
textbooks/biz/Management/Book%3A_Information_Systems__A_Managers_Guide_to_Harnessing_Technology/02%3A_Strategy_and_Technology-_Concepts_and_Frameworks_for_Understanding_What_Separates_Winners_from_Losers/2.04%3A_Key_Framework-_The_Five_F.txt
Learning Objective After studying this section you should be able to do the following: 1. Understand how Zara’s parent company Inditex leveraged a technology-enabled strategy to become the world’s largest fashion retailer. The poor, shipbuilding town of La Coruña in northern Spain seems an unlikely home to a tech-charged innovator in the decidedly ungeeky fashion industry, but that’s where you’ll find “The Cube,” the gleaming, futuristic central command of the Inditex Corporation (Industrias de Diseño Textil), parent of game-changing clothing giant Zara. The blend of technology-enabled strategy that Zara has unleashed seems to break all of the rules in the fashion industry. The firm shuns advertising and rarely runs sales. Also, in an industry where nearly every major player outsources manufacturing to low-cost countries, Zara is highly vertically integrated, keeping huge swaths of its production process in-house. These counterintuitive moves are part of a recipe for success that’s beating the pants off the competition, and it has turned the founder of Inditex, Amancio Ortega, into Spain’s wealthiest man and the world’s richest fashion executive. Figure 3.1 Zara’s operations are concentrated in Spain, but the firm has stores around the world, like these in Tokyo and Canada. Alberto Garcia – Zara – CC BY-SA 2.0; bargainmoose – Zara Store Canada – CC BY 2.0. 3.02: Don’t Guess, Gather Data Learning Objective After studying this section you should be able to do the following: 1. Contrast Zara’s approach with the conventional wisdom in fashion retail, examining how the firm’s strategic use of information technology influences design and product offerings, manufacturing, inventory, logistics, marketing, and ultimately profitability. Having the wrong items in its stores hobbled Gap for nearly a decade. But how do you make sure stores carry the kinds of things customers want to buy? Try asking them. Zara’s store managers lead the intelligence-gathering effort that ultimately determines what ends up on each store’s racks. Armed with personal digital assistants (PDAs)—handheld computing devices meant largely for mobile use outside an office setting—to gather customer input, staff regularly chat up customers to gain feedback on what they’d like to see more of. A Zara manager might casually ask, “What if this skirt were in a longer length?” “Would you like it in a different color?” “What if this V-neck blouse were available in a round neck?” Managers are motivated because they have skin in the game. The firm is keen to reward success—as much as 70 percent of salaries can come from commissions (Capell, 2008). Another level of data gathering starts as soon as the doors close. Then the staff turns into a sort of investigation unit in the forensics of trendspotting, looking for evidence in the piles of unsold items that customers tried on but didn’t buy. Are there any preferences in cloth, color, or styles offered among the products in stock (Sull & Turconi, 2008)? PDAs are also linked to the store’s point-of-sale (POS) system—a transaction process that captures customer purchase information—showing how garments rank by sales. In less than an hour, managers can send updates that combine the hard data captured at the cash register with insights on what customers would like to see (Rohwedder & Johnson, 2008). All this valuable data allows the firm to plan styles and issue rebuy orders based on feedback rather than hunches and guesswork. 
The goal is to improve the frequency and quality of decisions made by the design and planning teams. • Design Rather than create trends by pushing new lines via catwalk fashion shows, Zara designs follow evidence of customer demand. Data on what sells and what customers want to see goes directly to “The Cube” outside La Coruña, where teams of some three hundred designers crank out an astonishing thirty thousand items a year versus two to four thousand items offered up at big chains like H&M (the world’s third largest fashion retailer) and Gap (Pfeifer, 2007).1 While H&M has offered lines by star designers like Stella McCartney and Karl Lagerfeld, as well as celebrity collaborations with Madonna and Kylie Minogue, the Zara design staff consists mostly of young, hungry Project Runway types fresh from design school. There are no prima donnas in “The Cube.” Team members must be humble enough to accept feedback from colleagues and share credit for winning ideas. Individual bonuses are tied to the success of the team, and teams are regularly rotated to cross-pollinate experience and encourage innovation. • Manufacturing and Logistics In the fickle world of fashion, even seemingly well-targeted designs could go out of favor in the months it takes to get plans to contract manufacturers, tool up production, then ship items to warehouses and eventually to retail locations. But getting locally targeted designs quickly onto store shelves is where Zara really excels. In one telling example, when Madonna played a set of concerts in Spain, teenage girls arrived at the final show sporting a Zara knockoff of the outfit she wore during her first performance.1 The average time for a Zara concept to go from idea to appearance in store is fifteen days, versus rivals that receive new styles only once or twice a season. Smaller tweaks arrive even faster. If enough customers come in and ask for a round neck instead of a V neck, a new version can be in stores within just ten days (Tagliabue, 2003). To put that in perspective, Zara is twelve times faster than Gap despite offering roughly ten times more unique products (Helft, 2002)! At H&M, it takes three to five months to go from creation to delivery—and it’s considered one of the best. Other retailers need an average of six months to design a new collection and then another three months to manufacture it. VF Corp (Lee, Wrangler) can take nine months just to design a pair of jeans, while J. Jill needs a year to go from concept to store shelves (Sullivan, 2005). At Zara, most of the products you see in stores didn’t exist three weeks earlier, not even as sketches (Surowiecki, 2000). The firm is able to be so responsive through a competitor-crushing combination of vertical integration and technology-orchestrated coordination of suppliers, just-in-time manufacturing, and finely tuned logistics. Vertical integration is when a single firm owns several layers in its value chain. While H&M has nine hundred suppliers and no factories, nearly 60 percent of Zara’s merchandise is produced in-house, with an eye on leveraging technology in those areas that speed up complex tasks, lower cycle time, and reduce error. Profits from this clothing retailer come from blending math with a data-driven fashion sense. Inventory optimization models help the firm determine how many of which items in which sizes should be delivered to each specific store during twice-weekly shipments, ensuring that each store is stocked with just what it needs (Gentry, 2007). 
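To make the logic of these systems a bit more tangible, here is a minimal sketch, in Python, of two steps described above: ranking garments by recent point-of-sale results and splitting a limited production run across stores for a twice-weekly shipment. It is purely illustrative: the item names, sales figures, and the simple proportional-allocation rule are hypothetical assumptions for teaching purposes, not Zara’s actual data or optimization models.

# Hypothetical sketch: rank garments by POS sales, then split a limited
# run across stores in proportion to recent demand. All figures and the
# allocation rule are illustrative assumptions only.

def rank_by_sales(pos_sales):
    """Order garments by units sold, the ranking store managers review."""
    return sorted(pos_sales.items(), key=lambda kv: kv[1], reverse=True)

def allocate(units_available, store_demand):
    """Split a limited production run across stores in proportion to demand."""
    total = sum(store_demand.values())
    if total == 0:
        return {store: 0 for store in store_demand}
    allocation = {store: (units_available * sold) // total
                  for store, sold in store_demand.items()}
    # Hand out any remaining units to the strongest-selling stores first.
    leftover = units_available - sum(allocation.values())
    for store, _ in rank_by_sales(store_demand)[:leftover]:
        allocation[store] += 1
    return allocation

weekly_pos = {"round-neck blouse": 240, "v-neck blouse": 90, "pencil skirt": 150}
print(rank_by_sales(weekly_pos))

demand_by_store = {"Madrid": 60, "Tokyo": 30, "New York": 10}
print(allocate(units_available=50, store_demand=demand_by_store))

Real inventory optimization models are far more sophisticated, factoring in sizes, lead times, and forecast uncertainty, but the core idea is the same: let fresh store-level data, rather than hunches, drive what gets made and where it is shipped.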
Outside the distribution center in La Coruña, fabric is cut and dyed by robots in twenty-three highly automated factories. Zara is so vertically integrated, the firm makes 40 percent of its own fabric and purchases most of its dyes from its own subsidiary. Roughly half of the cloth arrives undyed so the firm can respond to midseason fashion shifts as they occur. After cutting and dyeing, many items are stitched together through a network of local cooperatives that have worked with Inditex so long they don’t even operate with written contracts. The firm does leverage contract manufacturers (mostly in Turkey and Asia) to produce staple items with longer shelf lives, such as t-shirts and jeans, but such goods account for only about one-eighth of dollar volume (Tokatli, 2008). All of the items the firm sells end up in a five-million-square-foot distribution center in La Coruña, or a similar facility in Zaragoza in the northeast of Spain. The La Coruña facility is some nine times the size of Amazon’s warehouse in Fernley, Nevada, or about the size of ninety football fields (Helft, 2002). The facilities move about two and a half million items every week, with no item staying in-house for more than seventy-two hours. Ceiling-mounted racks and customized sorting machines patterned on equipment used by overnight parcel services, and leveraging Toyota-designed logistics, whisk items from factories to staging areas for each store. Clothes are ironed in advance and packed on hangers, with security and price tags affixed. This system means that instead of wrestling with inventory during busy periods, employees in Zara stores simply move items from shipping box to store racks, spending most of their time on value-added functions like helping customers find what they want. Efforts like this help store staff regain as much as three hours in prime selling time (Rohwedder & Johnson, 2008; Capell, 2008). Trucks serve destinations that can be reached overnight, while chartered cargo flights serve farther destinations within forty-eight hours (Capell, 2008). The firm recently tweaked its shipping models through Air France–KLM Cargo and Emirates Air so flights can coordinate outbound shipment of all Inditex brands with return legs loaded with raw materials and half-finished clothes items from locations outside of Spain. Zara is also a pioneer in going green. In fall 2007, the firm’s CEO unveiled an environmental strategy that includes the use of renewable energy systems at logistics centers and the introduction of biodiesel for the firm’s trucking fleet. • Stores Most products are manufactured for a limited production run. While running out of bestsellers might be seen as a disaster at most retailers, at Zara the practice delivers several benefits. First, limited runs allow the firm to cultivate the exclusivity of its offerings. While a Gap in Los Angeles carries nearly the same product line as one in Milwaukee, each Zara store is stocked with items tailored to the tastes of its local clientele. A Fifth Avenue shopper quips, “At Gap, everything is the same,” while a Zara shopper in Madrid says, “You’ll never end up looking like someone else” (Capell, 2006). Upon visiting a Zara, the CEO of the National Retail Federation marveled, “It’s like you walk into a new store every two weeks” (Helft, 2002). Second, limited runs encourage customers to buy right away and at full price. Savvy Zara shoppers know the newest items arrive on black plastic hangers, with store staff transferring items to wooden ones later on.
Don’t bother asking when something will go on sale; if you wait three weeks, the item you wanted has almost certainly been sold or moved out to make room for something new. Says one twenty-three-year-old Barcelona shopper, “If you see something and don’t buy it, you can forget about coming back for it because it will be gone” (Capell, 2006). A study by consulting firm Bain & Company estimated that the industry average markdown ratio is approximately 50 percent, while Zara books some 85 percent of its products at full price (Sull & Turconi, 2008; Capell, 2006). The constant parade of new, limited-run items also encourages customers to visit often. The average Zara customer visits the store seventeen times per year, compared with only three annual visits made to competitors (Kumar & Linguri, 2006). Even more impressive—Zara puts up these numbers with almost no advertising. The firm’s founder has referred to advertising as a “pointless distraction.” The assertion carries particular weight when you consider that during Gap’s collapse, the firm increased advertising spending but sales dropped (Bhatnagar, 2004). Fashion retailers spend an average of 3.5 percent of revenue promoting their products, while ad spending at Inditex is just 0.3 percent. Finally, limited production runs allow the firm to, as Zara’s CEO once put it, “reduce to a minimum the risk of making a mistake, and we do make mistakes with our collections” (Vitzthum, 2001). Failed product introductions are reported to be just 1 percent, compared with the industry average of 10 percent (Kumar & Linguri, 2006). So even though Zara has higher manufacturing costs than rivals, Inditex gross margins are 56.8 percent compared to 37.5 percent at Gap (Rohwedder, 2009; Capell, 2008). While stores provide valuable front-line data, headquarters plays a major role in directing in-store operations. Software is used to schedule staff based on each store’s forecasted sales volume, with locations staffing up at peak times such as lunch or early evening. The firm claims these more flexible schedules have shaved staff work hours by 2 percent. This constant refinement of operations throughout the firm’s value chain has helped reverse a prior trend of costs rising faster than sales (Rohwedder & Johnson, 2008). Even the store displays are directed from “The Cube,” where a basement staging area known as “Fashion Street” houses a Potemkin village of bogus storefronts meant to mimic some of the chain’s most exclusive locations throughout the world. It’s here that workers test and fine-tune the chain’s award-winning window displays, merchandise layout, and even determine the in-store soundtrack. Every two weeks, new store layout marching orders are forwarded to managers at each location (Rohwedder & Johnson, 2008). Technology ≠ Systems. Just Ask Prada Here’s another interesting thing about Zara. Given the sophistication and level of technology integration within the firm’s business processes, you’d think that Inditex would far outspend rivals on tech. But as researchers Donald Sull and Stefano Turconi discovered, “Whether measured by IT workers as a percentage of total employees or total spending as a percentage of sales, Zara’s IT expenditure is less than one-fourth the fashion industry average” (Sull & Turconi, 2008). Zara excels by targeting technology investment at the points in its value chain where it will have the most significant impact, making sure that every dollar spent on tech has a payoff.
Contrast this with high-end fashion house Prada’s efforts at its flagship Manhattan location. The firm hired the Pritzker Prize–winning hipster architect Rem Koolhaas to design a location Prada would fill with jaw-dropping technology. All items for sale in the store would sport radio frequency identification (RFID) tags (small chip-based tags that wirelessly emit a unique identifying code for the item that they are attached to). Walk into a glass dressing room and customers could turn the walls opaque, then into a kind of combination mirror and heads-up display. By wirelessly reading the tags on each garment, dressing rooms would recognize what was brought in and make recommendations of matching accessories as well as similar products that patrons might consider. Customers could check inventory, and staff sporting PDAs could do the same. A dressing room camera would allow clients to see their front and back view side-by-side as they tried on clothes. It all sounded slick, but execution of the vision was disastrous. Customers didn’t understand the foot pedals that controlled the dressing room doors and displays. Reports surfaced of fashionistas disrobing in full view, thinking the walls went opaque when they didn’t. Others got stuck in dressing rooms when pedals failed to work, or doors broke, unable to withstand the demands of the high-traffic tourist location. The inventory database was often inaccurate, regularly reporting items as out of stock even though they weren’t. As for the PDAs, staff reported that they “don’t really use them anymore” and that “we put them away so tourists don’t play with them.” The investment in Prada’s in-store technology was also simply too high, with estimates suggesting the location took in just one-third the sales needed to justify expenses (Lindsay, 2004). The Prada example offers critical lessons for managers. While it’s easy to get seduced by technology, an information system (IS) is actually made up of more than hardware and software. An IS also includes data used or created by the system, as well as the procedures and the people who interact with the system (Sanchenko, 2007). Getting the right mix of these five components is critical to executing a flawless information system rollout. Financial considerations should forecast the return on investment (ROI)—the amount earned from an expenditure—of any such effort (i.e., what will we get for our money and how long will it take to receive payback?). A rough payback sketch appears after the exercises below. And designers need to thoroughly test the system before deployment. At Prada’s Manhattan flagship store, the effort looked like tech chosen because it seemed fashionable rather than functional. Key Takeaways • Zara store management and staff use PDAs and POS systems to gather and analyze customer preference data to plan future designs based on feedback, rather than on hunches and guesswork. • Zara’s combination of vertical integration and technology-orchestrated supplier coordination, just-in-time manufacturing, and logistics allows it to go from design to shelf in days instead of months. • Advantages accruing to Inditex include fashion exclusivity, fewer markdowns and sales, lower marketing expenses, and more frequent customer visits. • Zara’s IT expenditures are low by fashion industry standards. The spectacular benefits reaped by Zara from the deployment of technology have resulted from targeting technology investment at the points in the value chain where it has the greatest impact, and not from the sheer magnitude of the investment.
This is in stark contrast to Prada’s experience with in-store technology deployment. • While information technology is just hardware and software, information systems also include data, people, and procedures. It’s critical for managers to think about systems, rather than just technologies, when planning for and deploying technology-enabled solutions. Questions and Exercises 1. In what ways is the Zara model counterintuitive? In what ways has Zara’s model made the firm a better performer than Gap and other competitors? 2. What factors account for a firm’s profit margin? What does Gap focus on? What factors does Zara focus on to ensure a strong profit margin? 3. How is data captured in Zara stores? Using what types or classifications of information systems? How does the firm use this data? 4. What role does technology play in enabling the other elements of Zara’s counterintuitive strategy? Could the firm execute its strategy without technology? Why or why not? 5. How does technology spending at Zara compare to that of rivals? Advertising spending? Failed product percentages? Markdowns? 6. What risks are inherent in the conventional practices in the fashion industry? Is Zara susceptible to these risks? Is Zara susceptible to different risks? If so, what are these? 7. Consider the Prada case mentioned in the sidebar “Technology ≠ Systems.” What did Prada fail to consider when it rolled out the technology in its flagship location? Could this effort have been improved for better results? If you were put in charge of this kind of effort, what factors would you consider? What would determine whether you’d go forward with the effort or not? If you did go forward, what factors would you consider and how might you avoid some of the mistakes made by Prada?
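As promised above, here is a minimal payback sketch to accompany the ROI discussion and the Prada exercise. The figures are invented for illustration only (they are not Prada’s or Zara’s actual numbers); the point is simply that an ROI forecast forces the question of what the money will return and how long payback will take before a rollout is approved.

```python
# Hypothetical figures chosen for illustration; not actual Prada or Zara data.

def payback_years(upfront_cost, annual_benefit):
    """Years needed for cumulative benefit to cover the upfront technology spend."""
    if annual_benefit <= 0:
        return float("inf")   # the project never pays back
    return upfront_cost / annual_benefit

# Suppose a flagship store spends $10M on in-store technology and expects it to
# lift annual gross profit by $1.2M. Payback takes roughly 8.3 years.
print(round(payback_years(10_000_000, 1_200_000), 1))

# If the store realizes only one-third of the expected lift, payback stretches to 25 years.
print(round(payback_years(10_000_000, 1_200_000 / 3), 1))
```

Even this crude arithmetic makes the managerial point: when realized benefits fall to a fraction of the forecast, a fashionable technology project can become impossible to justify.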
Learning Objectives After studying this section you should be able to do the following: 1. Detail how Zara’s approach counteracts specific factors that Gap has struggled with for over a decade. 2. Identify the environmental threats that Zara is likely to face, and consider options available to the firm for addressing these threats. The holy grail for the strategist is to craft a sustainable competitive advantage that is difficult for competitors to replicate. And for nearly two decades Zara has delivered the goods. But that’s not to say the firm is done facing challenges. Consider the limitations of Zara’s Spain-centric, just-in-time manufacturing model. By moving all of the firm’s deliveries through just two locations, both in Spain, the firm remains hostage to anything that could create a disruption in the region. Firms often hedge risks that could shut down operations—think weather, natural disaster, terrorism, labor strife, or political unrest—by spreading facilities throughout the globe. If problems occur in northern Spain, Zara has no such fallback. In addition to the operations vulnerabilities above, the model also leaves the firm potentially more susceptible to financial vulnerabilities during periods when the euro strengthens relative to the dollar. Many low-cost manufacturing regions have currencies that are either pegged to the dollar or have otherwise fallen against the euro. This situation means Zara’s Spain-centric costs rise at higher rates compared to competitors, presenting a challenge in keeping profit margins in check. Rising transportation costs are another concern. If fuel costs rise, the model of twice-weekly deliveries that has been key to defining the Zara experience becomes more expensive to maintain. Still, Zara is able to make up for some cost increases by raising prices overseas (in the United States, Zara items can cost 40 percent or more above what they sell for in Spain). Zara reports that all North American stores are profitable, and that it can continue to grow its presence, serving forty to fifty stores with just two U.S. jet flights a week (Tagliabue, 2003). Management has considered a logistics center in Asia, but expects current capacity will suffice until 2013 (Rohwedder & Johnson, 2008). Another possibility might be a center in the Maquiladora region of northern Mexico, which could serve the U.S. markets via trucking capacity similar to the firm’s Spain-based access to Europe, while also providing a regional center to serve expansion throughout the Western Hemisphere. Rivals have studied the Zara recipe, and while none have attained the efficiency of Amancio Ortega’s firm, many are trying to learn from the master. There is precedent for contract firms closing the cycle time gap with vertically integrated competitors that own their own factories. Dell (a firm that builds its own PCs while nearly all its competitors use contract labor) has recently seen its manufacturing advantage from vertical integration fall as the partners that supply rivals have mimicked its techniques and have become far more efficient (Friscia et al., 2009). In terms of the number of new models offered, clothing is actually more complex than computing, suggesting that Zara’s value chain may be more difficult to copy. Still, H&M has increased the frequency of new items in stores, Forever 21 and Uniqlo get new looks within six weeks, and Renner, a Brazilian fast fashion rival, rolls out mini collections every two months (Pfeifer, 2007; Rohwedder & Johnson, 2008).
Rivals have a keen eye on Inditex, with the CFO of luxury goods firm Burberry claiming the firm is a “fantastic case study” and “we’re mindful of their techniques” (Rohwedder & Johnson, 2008). Finally, firm financial performance can also be impacted by broader economic conditions. When the economy falters, consumers simply buy less and may move a greater share of their wallet to less-stylish and lower-cost offerings from deep discounters like Wal-Mart. Zara is particularly susceptible to conditions in Spain, since the market accounts for nearly 40 percent of Inditex sales (Hall, 2008), as well as to broader West European conditions (which with Spain make up 79 percent of sales) (Rohwedder, 2009). Global expansion will provide the firm with a mix of locations that may be better able to endure downturns in any single region. Recent Spanish and European financial difficulties have made clear the need to decrease dependence on sales within one region. Zara’s winning formula can only exist through management’s savvy understanding of how information systems can enable winning strategies (many tech initiatives were led by José Maria Castellano, a “technophile” business professor who became Ortega’s right-hand man in the 1980s) (Rohwedder & Johnson, 2008). It is technology that helps Zara identify and manufacture the clothes customers want, get those products to market quickly, and eliminate costs related to advertising, inventory missteps, and markdowns. A strategist must always scan the state of the market as well as the state of the art in technology, looking for new opportunities and remaining aware of impending threats. With systems so highly tuned for success, it may be unwise to bet against “The Cube.” Key Takeaway • Zara’s value chain is difficult to copy, but it is not invulnerable, nor is future dominance guaranteed. Zara management must be aware of the limitations in its business model, and must continually scan its environment and be prepared to react to new threats and opportunities. Questions and Exercises 1. The Zara case shows how information systems can impact every single management discipline. Which management disciplines were mentioned in this case? How does technology impact each? 2. Would a traditional Internet storefront work well with Zara’s business model? Why or why not? 3. Zara’s just-in-time, vertically integrated model has served the firm well, but an excellent business is not a perfect business. Describe the limitations of Zara’s model and list steps that management might consider to minimize these vulnerabilities.
Learning Objectives After studying this section you should be able to do the following: 1. Understand the basics of the Netflix business model. 2. Recognize the downside the firm may have experienced from an early IPO. 3. Appreciate why other firms found Netflix’s market attractive, and why many analysts incorrectly suspected Netflix was doomed. Entrepreneurs are supposed to want to go public. When a firm sells stock for the first time, the company gains a ton of cash to fuel expansion and its founders get rich. Going public is the dream in the back of the mind of every tech entrepreneur. But in 2007, Netflix founder and CEO Reed Hastings told Fortune that if he could change one strategic decision, it would have been to delay the firm’s initial public stock offering (IPO): “If we had stayed private for another two to four years, not as many people would have understood how big a business this could be” (Boyle, 2007). Once Netflix was a public company, financial disclosure rules forced the firm to reveal that it was on a money-minting growth tear. Once the secret was out, rivals showed up. Hollywood’s best couldn’t have scripted a more menacing group of rivals for Hastings to face. First in line with its own DVD-by-mail offering was Blockbuster, a name synonymous with video rental. Some 40 million U.S. families were already card-carrying Blockbuster customers, and the firm’s efforts promised to link DVD-by-mail with the nation’s largest network of video stores. Following close behind was Wal-Mart—not just a big Fortune 500 company but the largest firm in the United States ranked by sales. In Netflix, Hastings had built a great firm, but let’s face it, his was a dot-com, an Internet pure play without a storefront and with an overall customer base that seemed microscopic compared to these behemoths. Before all this, Netflix was feeling so confident that it had actually raised prices. Customers loved the service, the company was dominating its niche, and it seemed like the firm could take advantage of a modest price hike, pull in more revenue, and use this to improve and expand the business. But the firm was surprised by how quickly the newcomers mimicked Netflix with cheaper rival efforts. This new competition forced Netflix to cut prices even lower than where they had been before the price increase. To keep pace, Netflix also upped advertising at a time when online ad rates were increasing. Big competitors, a price war, spending on the rise—how could Netflix possibly withstand this onslaught? Some Wall Street analysts had even taken to referring to Netflix’s survival prospects as “The Last Picture Show” (Conlin, 2007). Fast-forward a year later and Wal-Mart had cut and run, dumping their experiment in DVD-by-mail. Blockbuster had been mortally wounded, hemorrhaging billions of dollars in a string of quarterly losses. And Netflix? Not only had the firm held customers, it grew bigger, recording record profits. The dot-com did it. Hastings, a man who prior to Netflix had already built and sold one of the fifty largest public software firms in the United States, had clearly established himself as one of America’s most capable and innovative technology leaders. In fact, at roughly the same time that Blockbuster CEO John Antioco resigned, Reed Hastings accepted an appointment to the Board of Directors of none other than the world’s largest software firm, Microsoft. 
Like the final scene in so many movies where the hero’s face is splashed across the news, Time named Hastings as one of the “100 most influential global citizens.” • Why Study Netflix? Studying Netflix gives us a chance to examine how technology helps firms craft and reinforce a competitive advantage. We’ll pick apart the components of the firm’s strategy and learn how technology played a starring role in placing the firm atop its industry. We also realize that while Netflix emerged the victorious underdog at the end of the first show, there will be at least one sequel, with the final scene yet to be determined. We’ll finish the case with a look at the very significant challenges the firm faces as new technology continues to shift the competitive landscape. How Netflix Works Reed Hastings, a former Peace Corps volunteer with a master’s in computer science, got the idea for Netflix when he was late in returning the movie Apollo 13 to his local video store. The forty-dollar late fee was enough to have bought the video outright with money left over. Hastings felt ripped off, and out of this initial outrage, Netflix was born. The model the firm eventually settled on was a DVD-by-mail service that charged a flat-rate monthly subscription rather than a per-disc rental fee. Customers don’t pay a cent in mailing expenses, and there are no late fees. Netflix offers nine different subscription plans, starting at less than five dollars. The most popular is a $16.99 option that offers customers three movies at a time and unlimited returns each month. Videos arrive in red Mylar envelopes. After tearing off the cover to remove the DVD, customers reveal prepaid postage and a return address. When done watching videos, consumers just slip the DVD back into the envelope, reseal it with a peel-back sticky-strip, and drop the disc in the mail. Users make their video choices in their “request queue” at Netflix.com. If a title isn’t available, Netflix simply moves to the next title in the queue (a minimal sketch of this queue logic appears after the exercises below). Consumers use the Web site to rate videos they’ve seen, specify their movie preferences, get video recommendations, check out DVD details, and even share their viewing habits and reviews. In 2007, the firm added a “Watch Now” button next to those videos that could be automatically streamed to a PC. Any customer paying at least $8.99 for a DVD-by-mail subscription plan can stream an unlimited number of videos each month at no extra cost. Key Takeaways • Analysts and managers have struggled to realize that dot-com start-up Netflix could actually create sustainable competitive advantage, beating back challenges from Wal-Mart and Blockbuster, among others. • Data disclosure required by public companies may have attracted these larger rivals to the firm’s market. • Netflix operates via a DVD subscription and video streaming model. Although sometimes referred to as “rental,” the model is really a substitute good for conventional use-based media rental. Questions and Exercises 1. How does the Netflix business model work? 2. Which firms are or have been Netflix’s most significant competitors? How do their financial results or performance of their efforts compare to Netflix’s efforts? 3. What recent appointment did Reed Hastings accept in addition to his job as Netflix CEO? Why is this appointment potentially important for Netflix? 4. Why did Wal-Mart and Blockbuster managers, as well as Wall Street analysts, underestimate Netflix?
What issues might you advise analysts and managers to consider so that they avoid making these sorts of mistakes in the future?
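As referenced earlier in this section, the queue behavior described above (“if a title isn’t available, move to the next title in the queue”) can be sketched in a few lines. This is a simplified illustration of the idea, not Netflix’s actual fulfillment code; the plan size, titles, and stock counts are invented for the example.

```python
# Simplified illustration of DVD-queue fulfillment; not Netflix's actual system.

def pick_shipments(queue, in_stock, discs_at_a_time):
    """Walk the subscriber's ranked queue and ship the first available titles,
    up to the number of discs the plan allows out at once."""
    shipped = []
    for title in queue:
        if len(shipped) == discs_at_a_time:
            break
        if in_stock.get(title, 0) > 0:        # skip titles with no copies on hand
            in_stock[title] -= 1
            shipped.append(title)
    return shipped

queue = ["Apollo 13", "The Conversation", "Crash", "The Godfather Part II"]
stock = {"Apollo 13": 0, "The Conversation": 3, "Crash": 1, "The Godfather Part II": 5}
print(pick_shipments(queue, stock, discs_at_a_time=3))
# ['The Conversation', 'Crash', 'The Godfather Part II'] -- Apollo 13 is skipped until it is back in stock
```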
Learning Objectives After studying this section you should be able to do the following: 1. Understand how many firms have confused brand and advertising, why branding is particularly important for online firms, and the factors behind Netflix’s exceptional brand strength. 2. Understand the long tail concept, and how it relates to Netflix’s ability to offer the customer a huge (the industry’s largest) selection of movies. 3. Know what collaborative filtering is, how Netflix uses collaborative filtering software to match movie titles with the customer’s taste, and in what ways this software helps Netflix garner sustainable competitive advantage. 4. List and discuss the several technologies Netflix uses in its operations to reduce costs and deliver customer satisfaction and enhance brand value. 5. Understand the role that scale economies play in Netflix’s strategies, and how these scale economies pose an entry barrier to potential competitors. 6. Understand the role that market entry timing has played in the firm’s success. To understand Netflix’s strengths, it’s important to view the firm as its customers see it. And for the most part, what they see they like—a lot! Netflix customers are rabidly loyal and rave about the service. The firm repeatedly ranks at the top of customer satisfaction surveys. Ratings agency ForeSee has named Netflix the number one e-commerce site in terms of customer satisfaction nine times in a row (placing it ahead of Apple and Amazon, among others). Netflix has also been cited as the best at satisfying customers by Nielsen and Fast Company, and was also named the Retail Innovator of the Year by the National Retail Federation. Building a great brand, especially one online, starts with offering exceptional value to the customer. Don’t confuse branding with advertising. During the dot-com era, firms thought brands could be built through Super Bowl ads and expensive television promotion. Advertising can build awareness, but brands are built through customer experience. This is a particularly important lesson for online firms. Have a bad experience at a burger joint and you might avoid that location but try another of the firm’s outlets a few blocks away. Have a bad experience online and you’re turned off by the firm’s one and only virtual storefront. If you click over to an online rival, the offending firm may have lost you forever. But if a firm can get you to stay through quality experience, switching costs and data-driven value might keep you there for a long, long time, even when new entrants try to court you away. If brand is built through customer experience, consider what this means for the Netflix subscriber. They expect the firm to offer a huge selection, to be able to find what they want, for it to arrive on time, for all of this to occur with no-brainer ease of use and convenience, and at a fair price. Technology drives all of these capabilities, so tech is at the very center of the firm’s brand building efforts. Let’s look at how the firm does it. • Selection: The Long Tail in Action Customers have flocked to Netflix in part because of the firm’s staggering selection. A traditional video store (and Blockbuster had some 7,800 of them) stocks roughly three thousand DVD titles on its shelves. For comparison, Netflix is able to offer its customers a selection of over one hundred thousand DVD titles, and rising! 
At traditional brick-and-mortar retailers, shelf space is the biggest constraint limiting a firm’s ability to offer customers what they want when they want it. Just which films, documentaries, concerts, cartoons, TV shows, and other fare make it inside the four walls of a Blockbuster store is dictated by what the average consumer is most likely to be interested in. To put it simply, Blockbuster stocks blockbusters. Finding the right product mix and store size can be tricky. Offer too many titles in a bigger storefront and there may not be enough paying customers to justify stocking less popular titles (remember, it’s not just the cost of the DVD—firms also pay for the real estate of a larger store, the workers, the energy to power the facility, etc.). You get the picture—there’s a breakeven point that is arrived at by considering the geographic constraint of the number of customers that can reach a location, factored in with store size, store inventory, the payback from that inventory, and the cost to own and operate the store. Anyone who has visited a video store only to find a title out of stock has run up against the limits of the physical store model. But many online businesses are able to run around these limits of geography and shelf space. Internet firms that ship products can get away with having just a few highly automated warehouses, each stocking just about all the products in a particular category. And for firms that distribute products digitally (think songs on iTunes), the efficiencies are even greater because there’s no warehouse or physical product at all (more on that later). Offer a nearly limitless selection and something interesting happens: there’s actually more money to be made selling the obscure stuff than the hits. Music service Rhapsody makes more from songs outside of the top ten thousand than it does from songs ranked above ten thousand. At Amazon.com, roughly 60 percent of books sold are titles that aren’t available in even the biggest Borders or Barnes & Noble Superstores (Anderson, 2004). And at Netflix, roughly 75 percent of DVD titles shipped are from back-catalog titles, not new releases (at Blockbuster outlets the equation is nearly flipped, with some 70 percent of business coming from new releases) (McCarthy, 2009). Consider that Netflix sends out forty-five thousand different titles each day. That’s fifteen times the selection available at your average video store! Each quarter, roughly 95 percent of titles are viewed—that means that every few weeks Netflix is able to find a customer for nearly every DVD title that has ever been commercially released. This phenomenon whereby firms can make money by selling a near-limitless selection of less-popular products is known as the long tail. The term was coined by Chris Anderson, an editor at Wired magazine, who also wrote a best-selling business book by the same name. The “tail” (see Figure 4.2 “The Long Tail”) refers to the demand for less popular items that aren’t offered by traditional brick-and-mortar shops. While most stores make money from the area under the curve from the vertical axis to the dotted line, long tail firms can also sell the less popular stuff. Each item under the right part of the curve may experience less demand than the most popular products, but someone somewhere likely wants it. And as demonstrated from the examples above, the total demand for the obscure stuff is often much larger than what can be profitably sold through traditional stores alone. 
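A rough way to see how much demand can sit in the tail is to model the demand curve and compare the area under the “head” with the area under the “tail.” The sketch below uses the roughly three-thousand-title store shelf and one-hundred-thousand-title catalog figures mentioned above, but the 1/rank (Zipf-style) demand curve is purely an assumption chosen for illustration, not Netflix data.

```python
# Illustrative Zipf-style demand curve; the decay rate is an assumption, not a Netflix figure.
CATALOG_SIZE = 100_000      # titles carried by a long-tail retailer
STORE_SHELF = 3_000         # titles a typical physical store can stock

# Demand for the title at a given popularity rank, proportional to 1/rank.
demand = [1.0 / rank for rank in range(1, CATALOG_SIZE + 1)]

head = sum(demand[:STORE_SHELF])     # demand reachable from a physical store shelf
tail = sum(demand[STORE_SHELF:])     # demand only a long-tail retailer can serve

print(f"Share of total demand in the tail: {tail / (head + tail):.0%}")   # about 29%
```

Under these made-up assumptions roughly 29 percent of total demand lies beyond what a store shelf can hold; the exact share depends entirely on how quickly the demand curve decays, which is precisely the debate over the size of the tail taken up just below.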
While some debate the size of the tail (e.g., whether obscure titles collectively are more profitable for most firms), two facts are critical to keep above this debate: (1) selection attracts customers, and (2) the Internet allows large-selection inventory efficiencies that offline firms can’t match.
Figure 4.2 The Long Tail
The long tail works because the cost of production and distribution drops to a point where it becomes economically viable to offer a huge selection. For Netflix, the cost to stock and ship an obscure foreign film is the same as sending out the latest Will Smith blockbuster. The long tail gives the firm a selection advantage (or one based on scale) that traditional stores simply cannot match. For more evidence that there is demand for the obscure stuff, consider Bollywood cinema—a term referring to films produced in India. When ranked by the number of movies produced each year, Bollywood is actually bigger than Hollywood, but in terms of U.S. demand, even the top-grossing Hindi film might open in only one or two American theaters, and few video stores carry many Bollywood DVDs. Again, we see the limits that geography and shelf space impose on traditional stores. As Anderson puts it, when it comes to traditional methods of distribution, “an audience too thinly spread is the same as no audience at all” (Anderson, 2004). While there are roughly 1.7 million South Asians living in the United States, Bollywood fans are geographically dispersed, making it difficult to offer content at a physical storefront. Fans of foreign films would often find the biggest selection at an ethnic grocery store, but even then, that wouldn’t be much. Enter Netflix. The firm has found the U.S. fans of South Asian cinema, sending out roughly one hundred thousand Bollywood DVDs a month. As geographic constraints go away, untapped markets open up! The power of Netflix can revive even well-regarded work by some of Hollywood’s biggest names. In between The Godfather and The Godfather Part II, director Francis Ford Coppola made The Conversation, a film starring Gene Hackman that, in 1975, was nominated for a Best Picture Academy Award. Coppola has called The Conversation the finest film he has ever made (Leonhardt, 2006), but it was headed for obscurity as the ever-growing pipeline of new releases pushed the film off of video store shelves. Netflix was happy to pick up The Conversation and put it in the long tail. Since then, the number of customers viewing the film has tripled, and on Netflix, this once underappreciated gem became the thirteenth most watched film from its time period. For evidence of Netflix’s power to make lucrative markets from nonblockbusters, visit the firm’s “Top 100” page. You’ll see a list loaded with films that were notable for their lack of box office success. As of this writing the number one rank had been held for over five years in a row, not by a first-run mega-hit, but by the independent film Crash (an Oscar winner, but box office weakling) (Elder, 2009). Netflix has used the long tail to its advantage, crafting a business model that creates close ties with film studios. In most cases, studios earn a percentage of the subscription revenue for every disk sent out to a Netflix customer. In exchange, Netflix gets DVDs at a very low cost. The movie business is characterized by large fixed costs up front. Studio marketing budgets are concentrated on films when they first appear in theaters, and when they’re first offered on DVD.
After that, studios are done promoting a film, focusing instead on their most current titles. But Netflix is able to find an audience for a film without the studios spending a dime on additional marketing. Since so many of the titles viewed on Netflix are in the long tail, revenue sharing is all gravy for the studios—additional income they would otherwise be unlikely to get. It’s a win-win for both ends of the supply chain. These supplier partnerships grant Netflix a sort of soft bargaining power that’s distinctly opposite the strong-arm price bullying that giants like Wal-Mart are often accused of. The VCR, the Real “Killer App”? Netflix’s coziness with movie studios is particularly noteworthy, given that the film industry has often viewed new technologies with a suspicion bordering on paranoia. In one of the most notorious incidents, Jack Valenti, the former head of the Motion Picture Association of America (MPAA), once lobbied the U.S. Congress to limit the sale of home video recorders, claiming, “the VCR is to the American film producer and the American public as the Boston strangler is to the woman home alone” (Bates, 2007). Not only was the statement over the top, Jack couldn’t have been more wrong. Revenue from the sale of VCR tapes would eventually surpass the take from theater box offices, and today, home video brings in about two times box office earnings.
Learning Objectives After studying this section you should be able to do the following: 1. Understand the shift from atoms to bits, and how this is impacting a wide range of industries. 2. Recognize the various key issues holding back streaming video models. 3. Know the methods that Netflix is using to attempt to counteract these challenges. Nicholas Negroponte, the former head of MIT’s Media Lab and founder of the One Laptop per Child effort, wrote a now-classic essay on the shift from atoms to bits. Negroponte pointed out that most media products are created as bits—digital files of ones and zeros that begin their life on a computer. Music, movies, books, and newspapers are all created using digital technology. When we buy a CD, DVD, or even a “dead tree” book or newspaper, we’re buying physical atoms that are simply a container for the bits that were created in software—a sound mixer, a video editor, or a word processor. The shift from atoms to bits is realigning nearly every media industry. Newspapers struggle as readership migrates online and once-lucrative classified ads and job listings shift to the bits-based businesses of Craigslist, Monster.com, and LinkedIn. Apple dominates music sales, selling not a single “atom” of physical CDs, while most of the atom-selling “record store” chains of a decade ago are bankrupt. Amazon has even begun delivering digital books, developing the Kindle digital reader. Who needs to kill a tree, spill ink, fill a warehouse, and roll a gas-guzzling truck to get you a book? Kindle can slurp your purchases through the air and display them on a device lighter than any college textbook. When Amazon CEO Bezos unveiled the Kindle DX at a press event at Pace University in Spring 2009, he indicated that Kindle book sales were accounting for 35 percent of sales for the two hundred and seventy-five thousand titles available for the device—a jaw-dropping impact for a device many had thought to be an expensive, niche product for gadget lovers (Penenberg, 2009). Video is already going digital, but Netflix became a profitable business by handling the atoms of DVDs. The question is, will the atoms to bits shift crush Netflix and render it as irrelevant as Hastings did Blockbuster? Or can Reed pull off yet another victory and recast his firm for the day that DVDs disappear? Concerns over the death of the DVD and the relentless arrival of new competitors are probably the main cause for Netflix’s stock volatility these past few years. Through the first half of 2010, the firm’s growth, revenue, and profit graphs all go up and to the right, but the stock has experienced wild swings as pundits have mostly guessed wrong about the firm’s imminent demise (one well-known Silicon Valley venture capitalist even referred to the firm as “an ice cube in the sun,” a statement Netflix countered with five years of record-breaking growth and profits) (Copeland, 2008). The troughs on the Netflix stock graph have proven great investment opportunities for the savvy. The firm broke all previous growth and earnings records and posted its lowest customer churn ever, even as a deep recession and the subprime crisis hammered many other firms. The firm continued to enjoy its most successful quarters as a public company, and subscriber growth rose even as DVD sales fell. But even the most bullish investor knows there’s no stopping the inevitable shift from atoms to bits, and the firm’s share price swings continue. 
When the DVD dies, the high-tech shipping and handling infrastructure that Netflix has relentlessly built will be rendered worthless. Reed Hastings clearly knows this, and he has a plan: “We named the company Netflix for a reason; we didn’t name it DVDs-by-mail” (Boyle, 2007). But he also prepared the public for a first-cut service that was something less than we’d expect from the long tail poster child. When speaking about the launch of the firm’s Internet video streaming offering in January 2007, Hastings said it would be “underwhelming.” The two biggest limitations of this initial service? As we’ll see below, not enough content, and figuring out how to squirt the bits to a television. • Access to Content First the content. Three years after the launch of Netflix’s streaming option (enabled via a “Watch Now” button next to movies that can be viewed online), only 17,000 videos were offered, just 17 percent of the firm’s long tail. And not the best 17 percent. Why so few titles? It’s not just studio reluctance or fear of piracy. There are often complicated legal issues involved in securing the digital distribution rights for all of the content that makes up a movie. Music, archival footage, and performer rights may all hold up a title from being available under “Watch Now.” The 2007 Writers Guild strike occurred largely due to negotiations over digital distribution, showing just how troublesome these issues can be. Add to that the exclusivity contracts negotiated by key channels, in particular the so-called premium television networks. Film studios release their work in a system called windowing. Content is available to a given distribution channel (in theaters, through hospitality channels like hotels and airlines, on DVD, via pay-per-view, via pay cable, then broadcast commercial TV) for a specified time window, usually under a different revenue model (ticket sales, disc sales, license fees for broadcast). Pay television channels in particular have negotiated exclusive access to content as they strive to differentiate themselves from one another. This exclusivity means that even when a title becomes available for streaming by Netflix, it may disappear when a pay TV window opens up. If HBO or Showtime has an exclusive for a film, it’s pulled from the Netflix streaming service until the exclusive pay TV time window closes. A 2008 partnership with the Starz network helped provide access to some content locked up inside pay television windows, and deals with Disney and CBS allow for streaming of current-season shows (Portnoy, 2008). But the firm still has a long way to go before the streaming tail seems comparable in length to its disc inventory. While studios embrace the audience-finding and revenue-sharing advantages of Netflix, they also don’t want to undercut higher-revenue early windows. Fox, Universal, and Warner have all demanded that Netflix delay sending DVDs to customers until twenty-eight days after titles go on sale. In exchange, Netflix has received guarantees that these studios will offer more content for digital streaming. There’s also the influence of the king of DVD sales: Wal-Mart. The firm accounts for about 40 percent of DVD sales—a scale that delivers a lot of the bargaining power it has used to “encourage” studios to hold content from competing windows or to limit offering digital titles at competitive pricing during the peak new release period (Grover, 2006). Apparently, Wal-Mart isn’t ready to yield ground in the shift from atoms to bits, either.
In February 2010, the retail giant spent an estimated $100 million to buy the little-known video streaming outfit VUDU (Stone, 2010). Wal-Mart’s negotiating power with studios may help it gain special treatment for VUDU. As an example, VUDU was granted exclusive high-definition streaming rights for the hit movie Avatar, offering the title online the same day the DVD appeared for sale (Jacobson, 2010). Studios may also be wary of the increasing power Netflix has over product distribution, and as such, they may be motivated to keep rivals around. Studios have granted Blockbuster more favorable distribution terms than Netflix. In many cases, Blockbuster can now distribute DVDs the day of release instead of waiting nearly a month, as Netflix does (Birchall, 2010). Studios are likely concerned that Netflix may be getting so big that it will one day have Wal-Mart-like negotiating leverage. Supplier Power and Atoms to Bits The winner-take-all, winner-take-most dynamics of digital distribution can put suppliers at a disadvantage. If firms rely on one channel partner for a large portion of sales, that partner has an upper hand in negotiations. For years, record labels and movie studios complained that Apple’s dominance of iTunes allowed them little negotiating room in price setting. A boycott in which NBC temporarily pulled TV shows from iTunes is credited with loosening Apple’s pricing policies. Similarly, when Amazon’s Kindle dominated the e-book reader market, Amazon enforced a $9.99 price on electronic editions, even as publishers lobbied for higher rates. It wasn’t until Apple arrived with a credible e-book rival in the iPad that Amazon’s leverage was weakened to the point where publishers were allowed to set their own e-book prices (Rich & Stone, 2010). Taken together, all these factors make it clear that shifting the long tail from atoms to bits will be significantly more difficult than buying DVDs and stacking them in a remote warehouse. • But How Does It Get to the TV? The other major problem lies in getting content to the place where most consumers want to watch it: the living room TV. Netflix’s “Watch Now” button first worked only on Windows PCs. Although the service was introduced in January 2007, the months before were fueled by speculation that the firm would partner with TiVo. Just one month later, TiVo announced its partner—Amazon.com. At that point Netflix found itself up against a host of rivals that all had a path to the television: Apple had its own hardware solution in Apple TV (not to mention the iPod and iPhone for portable viewing), the cable companies delivered OnDemand through their set-top boxes, and now Amazon had TiVo. An internal team at Netflix developed a prototype set-top box that Hastings himself supported offering. But most customers aren’t enthusiastic about purchasing yet another box for their set top, the consumer electronics business is brutally competitive, and selling hardware would introduce an entirely new set of inventory, engineering, marketing, distribution, and competitive complexities. The solution Netflix eventually settled on was to think beyond one hardware alternative and instead recruit others to provide a wealth of choice. The firm developed a software platform and makes this available to firms seeking to build Netflix access into their devices. Today, Netflix streaming is baked into televisions and DVD players from LG, Panasonic, Samsung, Sony, Toshiba, and Vizio, among others. It’s also available on all major video game consoles.
A Netflix app for Apple’s iPad was available the day the device shipped. Even TiVo now streams Netflix. And that internally developed Netflix set-top box? The group was spun out to form Roku, an independent firm that launched its own $99 Netflix streamer. The switch to Blu-ray movies may offer the most promise. Blu-ray players are on the fast track to commoditization. If consumer electronics firms incorporate Netflix access into their players as a way to attract more customers with an additional, differentiating feature, Hastings’s firm could end up with more living room access than either Amazon or Apple. There are 73 million households in the United States that have a DVD player and an Internet connection. Should a large portion of these homes end up with a Netflix-ready Blu-ray player, Hastings will have built himself an enviable base through which to grow the video streaming business. Disintermediation and Digital Distribution The purchase of NBC/Universal by Comcast, the largest cable television provider in the United States, has consolidated content and distribution in a single firm. The move can be described as both vertical integration (when an organization owns more than one layer of its value chain) and disintermediation (removing an organization from a firm’s distribution channel) (Gallaugher, 2002). Disintermediation in the video industry offers two potentially big benefits. First, studios don’t need to share revenue with third parties; they can keep all the money generated through new windows. Also critically important, studios keep the interface with their customers. Remember, in the digital age data is valuable; if another firm sits between a supplier and its customers, the supplier loses out on a key resource for competitive advantage. For more on the value of the data asset in maintaining and strengthening customer relationships, see Chapter 11 “The Data Asset: Databases, Business Intelligence, and Competitive Advantage”. Who’s going to win the race for delivering bits to the television is still very much an uncertain bet. The models all vary significantly. Apple’s early efforts were limited, with the firm offering only video purchases for Apple TV, but eventually moving to online “rentals” that can also play on the firm’s entire line of devices. Movie studios are now all in Apple’s camp, although the firm did temporarily lose NBC’s television content in a dispute over pricing. Amazon and Microsoft also have online rental and purchase services, and can get their content to the television via TiVo and Xbox, respectively (yes, this makes Microsoft both a partner and a sort of competitor, a phenomenon often referred to as coopetition, or frenemies) (Brandenburger & Nalebuff, 1997; Johnson, 2008). Hulu, a joint venture backed by NBC, Fox, and other networks, is free, earning money from ads that run like TV commercials. While Hulu has also received glowing reviews, the venture has lagged in offering a method to get streaming content to the television. Netflix pioneered “all-you-can-eat” subscription streaming. Anyone who has at least the $8.99 subscription plan can view an unlimited number of video streams. And Blockbuster isn’t dead yet. It also streams over TiVo and has other offerings in the works. There’s a clear upside to the model when it shifts to streaming: it will eliminate a huge chunk of costs associated with shipping and handling. Postage represents one-third of the firm’s expenses.
A round-trip DVD mailing, even at the deep discounts Netflix receives from the U.S. Postal Service, runs about eighty cents. The bandwidth and handling costs to send bits to a TV set are around a nickel (McCarthy, 2009). At some point, if postage goes away, Netflix may be in a position to offer even greater profits to its studio suppliers, and to make more money itself, too. Wrangling licensing costs presents a further challenge. Estimates peg Netflix’s 2009 streaming costs at about $100 million, up 250 percent in three years. But these expenses still deliver just a fraction of the long tail. Streaming licensing deals are tricky because they’re so inconsistent even when titles are available. Rates vary, with some offered via a flat rate for unlimited streams, a per-stream rate, a rate for a given number of streams, and various permutations in between. Some vendors have been asking as much as four dollars per stream for more valuable content (Rayburn, 2009)—a fee that would quickly erase subscriber profits, making any such titles too costly to add to the firm’s library. Remember, Netflix doesn’t charge more for streaming—it’s built into the price of its flat-rate subscriptions. Any extra spending doesn’t come at the best time. The switch to Blu-ray movies means that Netflix will be forced into the costly proposition of carrying two sets of video inventory: standard and high-def. Direct profits may not be the driver. Rather, the service may be a feature that attracts new customers to the firm and helps prevent subscriber flight to rival video-on-demand efforts. The stealth arrival of a Netflix set-top box, in the form of upgraded Blu-ray players, might open even more customer acquisition opportunities to the firm. Bought a Blu-ray player? For just nine dollars per month you can get a ticket to the all-you-can-eat Netflix buffet. And more customers ready to watch content streamed by Netflix may prime the pump for studios to become more aggressive in licensing more of their content. Many TV networks and movie studios are leery of losing bargaining power to a dominant firm, having witnessed how Apple now dictates pricing terms to music labels. The goodwill Netflix has earned over the years may pay off if it can become the studios’ partner of first choice. While one day the firm will lose the investment in its warehouse infrastructure, nearly all assets have a limited lifespan. That’s why corporations depreciate assets, writing their value down over time. The reality is that the shift from atoms to bits won’t flick on like a light switch; it will be a hybrid transition that takes place over several years. If the firm can grab long-tail content, grow its customer base, and lock them in with the switching costs created by Cinematch (all big “ifs”), it just might emerge as a key player in a bits-only world. Is the hybrid strategy a dangerous straddling gambit or a clever ploy to remain dominant? Netflix really doesn’t have a choice but to try. Hastings already has a long history as one of the savviest strategic thinkers in tech. As the networks say, stay tuned! Key Takeaways • The shift from atoms to bits is impacting all media industries, particularly those relying on print, video, and music content. Content creators, middlemen, retailers, consumers, and consumer electronics firms are all impacted. • Netflix’s shift to a streaming model (from atoms to bits) is limited by access to content and by methods to get this content to televisions.
• Windowing and other licensing issues limit available content, and inconsistencies in licensing rates make profitable content acquisitions a challenge. Questions and Exercises 1. What do you believe are the most significant long-term threats to Netflix? How is Netflix trying to address these threats? What obstacles does the firm face in dealing with these threats? 2. Who are the rivals to Netflix’s “Watch Now” effort? Do any of these firms have advantages that Netflix lacks? What are these advantages? 3. Why would a manufacturer of DVD players be motivated to offer the Netflix “Watch Now” feature in its products? 4. Describe various revenue models available as video content shifts from atoms to bits. What are the advantages and disadvantages to each—for consumers, for studios, for middlemen like television networks and Netflix? 5. Wal-Mart backed out of the DVD-by-mail industry. Why does the firm continue to have so much influence with the major film studios? What strategic asset is Wal-Mart leveraging? 6. Investigate the firm Red Box. Do you think they are a legitimate threat to Netflix? Why or why not?
Learning Objectives After studying this section you should be able to do the following: 1. Define Moore’s Law and understand the approximate rate of advancement for other technologies, including magnetic storage (disk drives) and telecommunications (fiber-optic transmission). 2. Understand how the price elasticity associated with faster and cheaper technologies opens new markets, creates new opportunities for firms and society, and can catalyze industry disruption. 3. Recognize and define various terms for measuring data capacity. 4. Consider the managerial implications of faster and cheaper computing for areas such as strategic planning, inventory, and accounting. Faster and cheaper—those two words have driven the computer industry for decades, and the rest of the economy has been along for the ride. Today it’s tough to imagine a single industry not impacted by more powerful, less expensive computing. Faster and cheaper puts mobile phones in the hands of peasant farmers, puts a free video game in your Happy Meal, and drives the drug discovery that may very well extend your life. • Some Definitions This phenomenon of “faster, cheaper” computing is often referred to as Moore’s Law, after Intel cofounder Gordon Moore. Moore didn’t show up one day, stance wide, hands on hips, and declare “behold my law,” but he did write a four-page paper for Electronics Magazine in which he described how the process of chip making enabled more powerful chips to be manufactured at cheaper prices (Moore, 1965). Moore’s friend, legendary chip entrepreneur and CalTech professor Carver Mead, later coined the “Moore’s Law” moniker. That name sounded snappy, plus as one of the founders of Intel, Moore had enough geek cred for the name to stick. Moore’s original paper offered language only a chip designer would love, so we’ll rely on the more popular definition: chip performance per dollar doubles every eighteen months (Moore’s original paper assumed two years, but many sources today refer to the eighteen-month figure, so we’ll stick with that). Moore’s Law applies to chips—broadly speaking, to processors, or the electronics stuff that’s made out of silicon. The microprocessor is the brain of a computing device. It’s the part of the computer that executes the instructions of a computer program, allowing it to run a Web browser, word processor, video game, or virus. For processors, Moore’s Law means that next generation chips should be twice as fast in eighteen months, but cost the same as today’s models (or from another perspective, in a year and a half, chips that are the same speed as today’s models should be available for half the price). Random-access memory (RAM) is chip-based memory. The RAM inside your personal computer is volatile memory, meaning that when the power goes out, all is lost that wasn’t saved to nonvolatile memory (i.e., a more permanent storage medium like a hard disk or flash memory). Think of RAM as temporary storage that provides fast access for executing computer programs and files. When you “load” or “launch” a program, it usually moves from your hard drive to those RAM chips, where it can be more quickly executed by the processor. Cameras, MP3 players, USB drives, and mobile phones often use flash memory (sometimes called flash RAM). It’s not as fast as the RAM used in most traditional PCs, but holds data even when the power is off (so flash memory is also nonvolatile memory). You can think of flash memory as the chip-based equivalent of a hard drive.
In fact, flash memory prices are falling so rapidly that several manufacturers, including Apple and the One Laptop per Child initiative (see the “Tech for the Poor” sidebar later in this section), have begun offering chip-based, nonvolatile memory as an alternative to laptop hard drives. The big advantage? Chips are solid state electronics (meaning no moving parts), so they’re less likely to fail, and they draw less power. The solid state advantage also means that chip-based MP3 players like the iPod nano make better jogging companions than hard drive players, which can skip if jostled. For RAM chips and flash memory, Moore’s Law means that in eighteen months you’ll pay the same price as today for twice as much storage. Computer chips are sometimes also referred to as semiconductors (a substance, such as silicon, used inside most computer chips that is capable of enabling as well as inhibiting the flow of electricity). So if someone refers to the semiconductor industry, they’re talking about the chip business. Strictly speaking, Moore’s Law does not apply to other technology components. But other computing components are also seeing their price versus performance curves skyrocket exponentially. Data storage doubles every twelve months. Networking speed is on a tear, too. With an equipment change at the ends of the cables, the amount of data that can be squirted over an optical fiber line can double every nine months. These numbers should be taken as rough approximations and shouldn’t be expected to be strictly precise over time. However, they are useful as rough guides regarding future computing price/performance trends. Despite any fluctuation, it’s clear that the price/performance curve for many technologies is exponential, offering astonishing improvement over time. Figure 5.1 Advancing Rates of Technology (Silicon, Storage, Telecom) Adapted from a shareholder presentation by Jeff Bezos, Amazon.com, 2006.
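These doubling periods are easy to explore numerically. The short sketch below is an illustration in Python, not a forecast; it simply compounds the approximate rates cited above over a ten-year stretch.

```python
# Rough projection of cumulative price/performance gains, assuming the
# approximate doubling periods cited above: chips ~18 months, magnetic
# storage ~12 months, fiber-optic transmission ~9 months.
doubling_months = {
    "chips (Moore's Law)": 18,
    "magnetic storage": 12,
    "fiber-optic telecom": 9,
}

def improvement(months_elapsed, doubling_period):
    """Return the price/performance multiple after months_elapsed."""
    return 2 ** (months_elapsed / doubling_period)

YEARS = 10
for tech, period in doubling_months.items():
    multiple = improvement(YEARS * 12, period)
    print(f"{tech}: roughly {multiple:,.0f}x better per dollar after {YEARS} years")
```

Run it and the punch line of the chapter becomes obvious: even modest-sounding doubling periods compound into roughly hundred-fold to ten-thousand-fold improvements within a decade.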
textbooks/biz/Management/Book%3A_Information_Systems__A_Managers_Guide_to_Harnessing_Technology/05%3A_Moores_Law-_Fast_Cheap_Computing_and_What_It_Means_for_the_Manager/5.01%3A_Introduction.txt
Learning Objectives After studying this section you should be able to do the following: 1. Describe why Moore’s Law continues to advance and discuss the physical limitations of this advancement. 2. Name and describe various technologies that may extend the life of Moore’s Law. 3. Discuss the limitations of each of these approaches. Moore simply observed that we’re getting better over time at squeezing more stuff into tinier spaces. Moore’s Law is possible because the distance between the pathways inside silicon chips gets smaller with each successive generation. While chip plants (semiconductor fabrication facilities, or fabs) are incredibly expensive to build, each new generation of fabs can crank out more chips per silicon wafer. And since the pathways are closer together, electrons travel shorter distances. If electrons now travel half the distance to make a calculation, that means the chip is twice as fast. But the shrinking can’t go on forever, and we’re already starting to see three interrelated forces—size, heat, and power—threatening to slow down the Moore’s Law gravy train. When you make processors smaller, the more tightly packed electrons will heat up a chip—so much so that unless today’s most powerful chips are cooled down, they will melt inside their packaging. To keep the fastest computers cool, most PCs, laptops, and video game consoles need fans, and most corporate data centers have elaborate and expensive air conditioning and venting systems to prevent a meltdown. A trip through the Facebook data center during its recent rise would show that the firm was a “hot” start-up in more ways than one. The firm’s servers ran so hot that the Plexiglas sides of the firm’s server racks were warped and melting (McGirt, 2007)! The need to cool modern data centers draws a lot of power, and that costs a lot of money. The chief eco officer at Sun Microsystems has claimed that computers draw 4 to 5 percent of the world’s power. Google’s chief technology officer has said that the firm spends more to power its servers than the cost of the servers themselves (Kirkpatrick, 2007). Microsoft, Yahoo!, and Google have all built massive data centers in the Pacific Northwest, away from their corporate headquarters, specifically choosing these locations for access to cheap hydroelectric power. Google’s location in The Dalles, Oregon, is charged a cost per kilowatt hour of two cents by the local power provider, less than one-fifth of the eleven-cent rate the firm pays in Silicon Valley (Mehta, 2006). This difference means big savings for a firm that runs more than a million servers. And while these powerful shrinking chips are getting hotter and more costly to cool, it’s also important to realize that chips can’t get smaller forever. At some point Moore’s Law will run into the unyielding laws of nature. While we’re not certain where these limits are, chip pathways certainly can’t be shorter than a single molecule, and the actual physical limit is likely larger than that. Get too small and a phenomenon known as quantum tunneling kicks in, and electrons start to slide off their paths. Yikes! • Buying Time One way to overcome this problem is with multicore microprocessors, made by putting two or more lower-power processor cores (think of a core as the calculating part of a microprocessor) on a single chip. Philip Emma, IBM’s Manager of Systems Technology and Microarchitecture, offers an analogy.
Think of the traditional fast, hot, single-core processor as a three-hundred-pound lineman, and a dual-core processor as two 160-pound guys. Says Emma, “A 300-pound lineman can generate a lot of power, but two 160-pound guys can do the same work with less overall effort” (Ashton, 2005). For many applications, the multicore chips will outperform a single speedy chip, while running cooler and drawing less power. Multicore processors are now mainstream. Today, most PCs and laptops sold have at least a two-core (dual-core) processor. The Microsoft Xbox 360 has three cores. The PlayStation 3 includes the so-called cell processor developed by Sony, IBM, and Toshiba that runs nine cores. By 2010, Intel began shipping PC processors with eight cores, while AMD introduced a twelve-core chip. Intel has even demonstrated chips with upwards of fifty cores. Multicore processors can run older software written for single-brain chips. But they usually do this by using only one core at a time. To reuse the metaphor above, this is like having one of our 160-pound workers lift away, while the other one stands around watching. Multicore operating systems can help achieve some performance gains. Versions of Windows or the Mac OS that are aware of multicore processors can assign one program to run on one core, while a second application is assigned to the next core. But in order to take full advantage of multicore chips, applications need to be rewritten to split up tasks so that smaller portions of a problem are executed simultaneously inside each core. Writing code for this “divide and conquer” approach is not trivial. In fact, developing software for multicore systems is described by Shahrokh Daijavad, software lead for next-generation computing systems at IBM, as “one of the hardest things you learn in computer science” (Ashton, 2005). Microsoft’s chief research and strategy officer has called coding for these chips “the most conceptually different [change] in the history of modern computing” (Copeland, 2008). Despite this challenge, some of the most aggressive adopters of multicore chips have been video game console manufacturers. Video game applications are particularly well-suited for multiple cores since, for example, one core might be used to render the background, another to draw objects, another for the “physics engine” that moves the objects around, and yet another to handle Internet communications for multiplayer games. Another approach to breathing life into Moore’s Law is referred to as stacked or three-dimensional semiconductors. In this approach, engineers slice a flat chip into pieces, then reconnect the pieces vertically, making a sort of “silicon sandwich.” The chips are both faster and cooler since electrons travel shorter distances. What was once an end-to-end trip on a conventional chip might just be a tiny movement up or down on a stacked chip. But stacked chips present their own challenges. In the same way that a skyscraper is more difficult and costly to design and build than a ranch house, 3-D semiconductors are tougher to design and manufacture. IBM has developed stacked chips for mobile phones, claiming the technique improves power efficiency by up to 40 percent. HP Labs is using a technology called memristors, or memory resistors, to improve on conventional transistors and speed the transition to 3-D chips, yielding significant improvement over 2-D offerings (Markoff, 2010). Quantum Leaps, Chicken Feathers, and the Indium Gallium Arsenide Valley?
Think about it—the triple threat of size, heat, and power means that Moore’s Law, perhaps the greatest economic gravy train in history, will likely come to a grinding halt in your lifetime. Multicore and 3-D semiconductors are here today, but what else is happening to help stave off the death of Moore’s Law? Every once in a while a material breakthrough comes along that improves chip performance. A few years back researchers discovered that replacing a chip’s aluminum components with copper could increase speeds up to 30 percent. Now scientists are concentrating on improving the very semiconductor material that chips are made of. While the silicon used in chips is wonderfully abundant (it has pretty much the same chemistry found in sand), researchers are investigating other materials that might allow for chips with even tighter component densities. Researchers have demonstrated that chips made with supergeeky-sounding semiconductor materials such as indium gallium arsenide, germanium, and bismuth telluride can run faster and require less wattage than their silicon counterparts (Chen et al., 2009; Greene, 2007; Cane, 2006). Perhaps even more exotic (and downright bizarre), researchers at the University of Delaware have experimented with a faster-than-silicon material derived from chicken feathers! Hyperefficient chips of the future may also be made out of carbon nanotubes, once the technology to assemble the tiny structures becomes commercially viable. Other designs move away from electricity over silicon. Optical computing, where signals are sent via light rather than electricity, promises to be faster than conventional chips, if lasers can be mass-produced in miniature (silicon laser experiments show promise). Others are experimenting by crafting computing components using biological material (think a DNA-based storage device). One yet-to-be-proven technology that could blow the lid off what’s possible today is quantum computing. Conventional computing stores data as a combination of bits, where a bit is either a one or a zero. Quantum computers, leveraging principles of quantum physics, employ qubits that can be both one and zero at the same time. Add a bit to a conventional computer’s memory and you double its capacity. Add a qubit to a quantum computer and its capacity increases exponentially. For comparison, consider that a computer model of serotonin, a molecule vital to regulating the human central nervous system, would require 10^94 bytes of information. Unfortunately there’s not enough matter in the universe to build a computer that big. But modeling a serotonin molecule using quantum computing would take just 424 qubits (Kaihla, 2004). Some speculate that quantum computers could one day allow pharmaceutical companies to create hyperdetailed representations of the human body that reveal drug side effects before they’re even tested on humans. Quantum computing might also accurately predict the weather months in advance or offer unbreakable computer security. Ever have trouble placing a name with a face? A quantum computer linked to a camera (in your sunglasses, for example) could recognize the faces of anyone you’ve met and give you a heads-up to their name and background (Schwartz et al., 2006). Opportunities abound.
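To get a feel for why conventional machines choke on these problems, consider that fully describing the state of n qubits on a classical computer requires tracking 2^n complex numbers. The sketch below is a rough illustration of that growth; the sixteen-bytes-per-amplitude figure is an assumption (standard double-precision complex storage), not a claim from the text.

```python
# Why quantum systems overwhelm conventional machines: fully describing the
# state of n qubits on a classical computer means storing 2**n complex
# amplitudes. Sixteen bytes per amplitude (double-precision complex) is an
# assumption used for scale, not a figure from the text.
BYTES_PER_AMPLITUDE = 16

def classical_bytes_for_qubits(n):
    """Approximate classical memory needed to represent an n-qubit state."""
    return (2 ** n) * BYTES_PER_AMPLITUDE

for n in (10, 30, 50, 300):
    print(f"{n:>3} qubits -> about {classical_bytes_for_qubits(n):.3e} bytes")
# 10 qubits fit in kilobytes, 50 qubits already demand petabytes, and a few
# hundred qubits exceed any storage humanity could ever build.
```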
Of course, before quantum computing can be commercialized, researchers need to harness the freaky properties of quantum physics wherein your answer may reside in another universe, or could disappear if observed (Einstein himself referred to certain behaviors in quantum physics as “spooky action at a distance”). Pioneers in quantum computing include IBM, HP, NEC, and a Canadian start-up named D-Wave. If or when quantum computing becomes a reality is still unknown, but the promise exists that while Moore’s Law may run into limits imposed by Mother Nature, a new way of computing may blow past anything we can do with silicon, continuing to make possible the once impossible. Key Takeaways • As chips get smaller and more powerful, they get hotter and present power-management challenges. And at some point, Moore’s Law will stop because we will no longer be able to shrink the spaces between components on a chip. • Multicore chips use two or more low-power calculating “cores” to work together in unison, but to take optimal advantage of multicore chips, software must be rewritten to “divide” a task among multiple cores. • 3-D or stackable semiconductors can make chips faster and run cooler by shortening distances between components, but these chips are harder to design and manufacture. • New materials may extend the life of Moore’s Law, allowing chips to get smaller still. Entirely new methods for calculating, such as quantum computing, may also dramatically increase computing capabilities far beyond what is available today. Questions and Exercises 1. What three interrelated forces threaten to slow the advancement of Moore’s Law? 2. Which commercial solutions, described in the section above, are currently being used to counteract the forces mentioned above? How do these solutions work? What are the limitations of each? 3. Will multicore chips run software designed for single-core processors? 4. As chips grow smaller they generate increasing amounts of heat that needs to be dissipated. Why is keeping systems cool such a challenge? What are the implications for a firm like Yahoo! or Google? For a firm like Apple or Dell? 5. What are some of the materials that may replace the silicon that current chips are made of? 6. What kinds of problems might be solved if the promise of quantum computing is achieved? How might individuals and organizations leverage quantum computing? What sorts of challenges could arise from the widespread availability of such powerful computing technology?
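To make the “divide and conquer” idea from this section concrete, here is a minimal Python sketch that splits one computing chore into independent chunks so separate cores can grind on them at the same time. The prime-counting task is a hypothetical stand-in for any processor-hungry job; the point is the structure, not the math.

```python
# A minimal "divide and conquer" sketch: split one big job into chunks so
# each processor core can work on a piece simultaneously. The work function
# below is a hypothetical stand-in for any CPU-hungry task.
from concurrent.futures import ProcessPoolExecutor

def count_primes(bounds):
    """Count primes in [start, stop) using trial division (deliberately slow)."""
    start, stop = bounds
    count = 0
    for n in range(max(start, 2), stop):
        if all(n % d for d in range(2, int(n ** 0.5) + 1)):
            count += 1
    return count

if __name__ == "__main__":
    # One big problem, split into four independent chunks...
    chunks = [(0, 50_000), (50_000, 100_000), (100_000, 150_000), (150_000, 200_000)]
    # ...each chunk can run on its own core at the same time.
    with ProcessPoolExecutor() as pool:
        total = sum(pool.map(count_primes, chunks))
    print(f"Primes below 200,000: {total}")
```

Notice that the chunks never need to talk to one another, which is exactly why this kind of problem benefits from extra cores; if chunk two needed chunk one's answer before starting, the additional cores would simply sit and wait.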
textbooks/biz/Management/Book%3A_Information_Systems__A_Managers_Guide_to_Harnessing_Technology/05%3A_Moores_Law-_Fast_Cheap_Computing_and_What_It_Means_for_the_Manager/5.02%3A_The_Death_of_Moores_Law.txt
Learning Objectives After studying this section you should be able to do the following: 1. Give examples of the business use of supercomputing and grid computing. 2. Describe grid computing and discuss how grids transform the economics of supercomputing. 3. Understand the characteristics of problems that are and are not well suited for supercomputing and grid computing. As Moore’s Law makes possible the once impossible, businesses have begun to demand access to the world’s most powerful computing technology. Supercomputers are computers that are among the fastest of any in the world at the time of their introduction1. Supercomputing was once the domain of governments and high-end research labs, performing tasks such as simulating the explosion of nuclear devices, or analyzing large-scale weather and climate phenomena. But it turns out that, with a bit of tweaking, the algorithms used in this work are profoundly useful to business. Consider perhaps the world’s most well-known supercomputer, IBM’s Deep Blue, the machine that rather controversially beat chess champion Garry Kasparov. While there is not a burning need for chess-playing computers in the world’s corporations, it turns out that the computing algorithms to choose the best among multiple chess moves are similar to the math behind choosing the best combination of airline flights. One of the first customers of Deep Blue technologies was United Airlines, which gained an ability to examine three hundred and fifty thousand flight path combinations for its scheduling systems—a figure well ahead of the previous limit of three thousand. Estimated savings through better yield management? Over \$50 million! Finance found uses, too. An early adopter was CIBC (the Canadian Imperial Bank of Commerce), one of the largest banks in North America. Each morning CIBC uses a supercomputer to run its portfolio through Monte Carlo simulations that aren’t all that different from the math used to simulate nuclear explosions. At the time of deployment, CIBC was the only bank that international regulators allowed to calculate its own capital needs rather than use boilerplate ratios. That cut capital on hand by hundreds of millions of dollars, a substantial percentage of the bank’s capital, saving millions a year in funding costs. Also noteworthy: the supercomputer-enabled, risk-savvy CIBC was relatively unscathed by the subprime crisis. Modern supercomputing is typically done via a technique called massively parallel processing (computers designed with many microprocessors that work together, simultaneously, to solve problems). The fastest of these supercomputers are built using hundreds of microprocessors, all programmed to work in unison as one big brain. While supercomputers use special electronics and software to handle the massive load, the processors themselves are often of the off-the-shelf variety that you’d find in a typical PC. Virginia Tech created what at the time was the world’s third-fastest supercomputer by using chips from 1,100 Macintosh computers lashed together with off-the-shelf networking components. The total cost of the system was just \$5.2 million, far less than the typical cost for such burly hardware. The Air Force recently issued a request-for-proposal to purchase 2,200 PlayStation 3 systems in hopes of crafting a supercheap, superpowerful machine using off-the-shelf parts. Another technology, known as grid computing, is further transforming the economics of supercomputing.
With grid computing, firms place special software on their existing PCs or servers that enables these computers to work together on a common problem. Large organizations may have thousands of PCs, but they’re not necessarily being used all the time, or at full capacity. With grid software installed on them, these idle devices can be marshaled to attack portions of a complex task as if they collectively were one massively parallel supercomputer. This technique radically changes the economics of high-performance computing. BusinessWeek reports that while a middle-of-the-road supercomputer could run as much as \$30 million, grid computing software and services to perform comparable tasks can cost as little as twenty-five thousand dollars, assuming an organization already has PCs and servers in place. An early pioneer in grid computing is the biotech firm Monsanto. Monsanto enlists computers to explore ways to manipulate genes to create crop strains that are resistant to cold, drought, bugs, pesticides, or that are more nutritious. Previously, with even the largest computer Monsanto had in-house, gene analysis took six weeks, and the firm was able to analyze only ten to fifty genes a year. But by leveraging grid computing, Monsanto has reduced gene analysis to less than a day. The fiftyfold time savings now lets the firm consider thousands of genetic combinations in a year (Schwartz et al., 2006). Lower R&D time means faster time to market—critical to both the firm and its customers. Grids are now everywhere. Movie studios use them to create special effects and animated films. Procter & Gamble has used grids to redesign the manufacturing process for Pringles potato chips. GM and Ford use grids to simulate crash tests, saving millions in junked cars and speeding time to market. Pratt & Whitney test aircraft engine designs on a grid. And biotech firms including Aventis, GlaxoSmithKline, and Pfizer push their research through a quicker pipeline by harnessing grid power. JPMorgan Chase even launched a grid effort that mimics CIBC’s supercomputer, but at a fraction of the latter’s cost. By the second year of operation, the JPMorgan Chase grid was saving the firm \$5 million per year. You can join a grid, too. SETI@Home turns your computer screen saver into a method to help “search for extraterrestrial intelligence,” analyzing data from the Arecibo radio telescope system in Puerto Rico (no E.T. spotted yet). FightAids@Home will enlist your PC to explore AIDS treatments. And Folding@Home is an effort by Stanford researchers to understand the science of protein folding within diseases such as Alzheimer’s, cancer, and cystic fibrosis. A version of Folding@Home software for the PlayStation 3 had enlisted over half a million consoles within months of release. Having access to these free resources is an enormous advantage for researchers. Says the director of Folding@Home, “Even if we were given all of the NSF supercomputing centers combined for a couple of months, that is still fewer resources than we have now” (Johnson, 2002). Multicore, massively parallel, and grid computing are all related in that each attempts to lash together multiple computing devices so that they can work together to solve problems. Think of multicore chips as having several processors in a single chip.
Think of massively parallel supercomputers as having several chips in one computer, and think of grid computing as using existing computers to work together on a single task (essentially a computer made up of multiple computers). While these technologies offer great promise, they’re all subject to the same limitation: software must be written to divide existing problems into smaller pieces that can be handled by each core, processor, or computer, respectively. Some problems, such as simulations, are easy to split up, but for problems that are linear (where, for example, step two can’t be started until the results from step one are known), the multiple-brain approach doesn’t offer much help. Massive clusters of computers running software that allows them to operate as a unified service also enable new service-based computing models, such as software as a service (SaaS) and cloud computing. In these models, organizations replace traditional software and hardware that they would run in-house with services that are delivered online. Google, Microsoft, Salesforce.com, and Amazon are among the firms that have sunk billions into these Moore’s Law–enabled server farms, creating entirely new businesses that promise to radically redraw the software and hardware landscape while bringing gargantuan computing power to the little guy. (See Chapter 10 “Software in Flux: Partly Cloudy and Sometimes Free”.) Moore’s Law will likely hit its physical limit in your lifetime, but no one really knows if this “Moore’s Wall” is a decade away or more. What lies ahead is anyone’s guess. Some technologies, such as still-experimental quantum computing, could make computers that are more powerful than all the world’s conventional computers combined. Think strategically—new waves of innovation might soon be shouting “surf’s up!” Key Takeaways • Most modern supercomputers use massive sets of microprocessors working in parallel. • The microprocessors used in most modern supercomputers are often the same commodity chips that can be found in conventional PCs and servers. • Moore’s Law means that businesses as diverse as financial services firms, industrial manufacturers, consumer goods firms, and film studios can now afford access to supercomputers. • Grid computing software uses existing computer hardware to work together and mimic a massively parallel supercomputer. Using existing hardware for a grid can save a firm the millions of dollars it might otherwise cost to buy a conventional supercomputer, further bringing massive computing capabilities to organizations that would otherwise never benefit from this kind of power. • Massively parallel computing also enables the vast server farms that power online businesses like Google and Facebook, and which create new computing models, like software as a service (SaaS) and cloud computing. • The characteristics of problems best suited for solving via multicore systems, parallel supercomputers, or grid computers are those that can be divided up so that multiple calculating components can simultaneously work on a portion of the problem. Problems that are linear—where one part must be solved before moving to the next and the next—may have difficulty benefiting from these kinds of “divide and conquer” computing. Fortunately many problems such as financial risk modeling, animation, manufacturing simulation, and gene analysis are all suited for parallel systems. Questions and Exercises 1. What is the difference between supercomputing and grid computing? 
How is each phenomenon empowered by Moore’s Law? 2. How does grid computing change the economics of supercomputing? 3. Which businesses are using supercomputing and grid computing? Describe these uses and the advantages they offer their adopting firms. Are they a source of competitive advantage? Why or why not? 4. What are the characteristics of problems that are most easily solved using the types of parallel computing found in grids and modern day supercomputers? What are the characteristics of the sorts of problems not well suited for this type of computing? 5. Visit the SETI@Home Web site (seti.ssl.berkeley.edu). What is the purpose of the SETI@Home project? How do you participate? Is there any possible danger to your computer if you choose to participate? (Read their rules and policies.) 6. Search online to identify the five fastest supercomputers currently in operation. Who sponsors these machines? What are they used for? How many processors do they have? 7. What is “Moore’s Wall”? 8. What is the advantage of using grid computing to simulate an automobile crash test as opposed to actually staging a crash? 1A list of the current supercomputer performance champs can be found at http://www.top500.org.
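The distinction between problems these systems handle well and those they don't can be shown in a few lines of code. The sketch below is a toy illustration only; the simulation is not any bank's actual risk model, and the numbers are invented. The Monte Carlo batches are independent and could be farmed out to thousands of machines, while the compounding loop is strictly sequential.

```python
# Contrast between work that grids and supercomputers can split up and work
# they cannot. All figures here are made up for illustration.
import random

def simulate_portfolio_loss(trials, seed):
    """One independent batch of Monte Carlo trials. Thousands of batches like
    this could run at once on separate nodes and be merged afterward."""
    rng = random.Random(seed)
    return sum(max(0.0, rng.gauss(0.0, 1.0)) for _ in range(trials)) / trials

# Parallel-friendly: batches don't depend on one another, so they can be
# distributed across many computers and combined at the end.
batch_results = [simulate_portfolio_loss(10_000, seed) for seed in range(8)]
print("Average simulated loss:", sum(batch_results) / len(batch_results))

# Linear: each step needs the previous step's output, so extra processors
# mostly sit idle. This is the "step two can't start until step one
# finishes" case described in the section.
balance = 100.0
for year in range(30):
    balance = balance * 1.05  # next value depends entirely on the last one
print("Balance after 30 years of sequential compounding:", round(balance, 2))
```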
textbooks/biz/Management/Book%3A_Information_Systems__A_Managers_Guide_to_Harnessing_Technology/05%3A_Moores_Law-_Fast_Cheap_Computing_and_What_It_Means_for_the_Manager/5.03%3A_Bringing_Brains_Together-_Supercomputing_and_Grid_Computing.txt
Learning Objectives After studying this section you should be able to do the following: 1. Understand the magnitude of the environmental issues caused by rapidly obsolete, faster and cheaper computing. 2. Explain the limitations of approaches attempting to tackle e-waste. 3. Understand the risks firms are exposed to when not fully considering the lifecycle of the products they sell or consume. 4. Ask questions that expose concerning ethical issues in a firm or partner’s products and processes, and that help the manager behave more responsibly. We should celebrate the great bounty Moore’s Law and the tech industry bestow on our lives. Costs fall, workers become more productive, innovations flourish, and we gorge at a buffet of digital entertainment that includes music, movies, and games. But there is a dark side to this faster and cheaper advancement. A PC has an expected lifetime of three to five years. A cell phone? Two years or less. Rapid obsolescence means the creation of ever-growing mountains of discarded tech junk, known as electronic waste or e-waste. According to the U.S. Environmental Protection Agency (EPA), in 2007 the United States alone generated over 2.5 million tons of e-waste1, and the results aren’t pretty. Consumer electronics and computing equipment can be a toxic cocktail that includes cadmium, mercury, lead, and other hazardous materials. Once called the “effluent of the affluent,” e-waste will only increase with the rise of living standards worldwide. The quick answer would be to recycle this stuff. Not only does e-waste contain mainstream recyclable materials we’re all familiar with, like plastics and aluminum, it also contains small bits of increasingly valuable metals such as silver, platinum, and copper. In fact, there’s more gold in one pound of discarded tech equipment than in one pound of mined ore (Kovessy, 2008). But as the sordid record of e-waste management shows, there’s often a disconnect between consumers and managers who want to do good and those efforts that are actually doing good. The complexities of the modern value chain, the vagaries of international law, and the nefarious actions of those willing to put profits above principle show how difficult addressing this problem will be. The process of separating out the densely packed materials inside tech products so that the value in e-waste can be effectively harvested is extremely labor intensive, more akin to reverse manufacturing than any sort of curbside recycling efforts. Sending e-waste abroad can be ten times cheaper than dealing with it at home (Bodeen, 2007), so it’s not surprising that up to 80 percent of the material dropped off for recycling is eventually exported (Royte, 2006). Much of this waste ends up in China, South Asia, or sub-Saharan Africa, where it is processed in dreadful conditions. Consider the example of Guiyu, China, a region whose poisoning has been extensively chronicled by organizations such as the Silicon Valley Toxics Coalition, the Basel Action Network (BAN), and Greenpeace. Workers in and around Guiyu toil without protective equipment, breathing clouds of toxins generated as they burn the plastic skins off of wires to get at the copper inside. Others use buckets, pots, or wok-like pans (in many cases the same implements used for cooking) to sluice components in acid baths to release precious metals—recovery processes that create even more toxins. Waste sludge and the carcasses of what’s left over are most often dumped in nearby fields and streams. 
Water samples taken in the region showed lead and heavy metal contamination levels some four hundred to six hundred times greater than what international standards deem safe (Grossman, 2006). The area is so polluted that drinking water must be trucked in from eighteen miles away. Pregnancies are six times more likely to end in miscarriage, and 70 percent of the kids in the region have too much lead in their blood2. Figure 5.5 Photos from Guiyu, China (Biggs, 2008) Russ Allison Loar – Junk Mountain CC BY-NC-ND 2.0.
textbooks/biz/Management/Book%3A_Information_Systems__A_Managers_Guide_to_Harnessing_Technology/05%3A_Moores_Law-_Fast_Cheap_Computing_and_What_It_Means_for_the_Manager/5.04%3A_E-waste-_The_Dark_Side_of_Moores_Law.txt
Learning Objectives After studying this section you should be able to do the following: 1. Define network effects. 2. Recognize products and services that are subject to network effects. 3. Understand the factors that add value to products and services subject to network effects. Network effects are sometimes referred to as “Metcalfe’s Law” or “Network Externalities.” But don’t let the dull names fool you—this concept is rocket fuel for technology firms. Bill Gates leveraged network effects to turn Windows and Office into virtual monopolies and in the process became the wealthiest man in America. Mark Zuckerberg of Facebook, Pierre Omidyar of eBay, Caterina Fake and Stewart Butterfield of Flickr, Kevin Rose of Digg, Evan Williams and Biz Stone of Twitter, Chris DeWolfe and Tom Anderson—the MySpace guys—all of these entrepreneurs have built massive user bases by leveraging the concept. When network effects are present, the value of a product or service increases as the number of users grows. Simply, more users = more value. Of course, most products aren’t subject to network effects—you probably don’t care if someone wears the same socks, uses the same pancake syrup, or buys the same trash bags as you. But when network effects are present they’re among the most important reasons you’ll pick one product or service over another. You may care very much, for example, if others are part of your social network, if your video game console is popular, if the Wikipedia article you’re referencing has had prior readers. And all those folks who bought HD DVD players sure were bummed when the rest of the world declared Blu-ray the winner. In each of these examples, network effects are at work. Not That Kind of Network The term “network” sometimes stumps people when first learning about network effects. In this context, a network doesn’t refer to the physical wires or wireless systems that connect pieces of electronics. It just refers to a common user base that is able to communicate and share with one another. So Facebook users make up a network. So do owners of Blu-ray players, traders that buy and sell stock over the NASDAQ, or the sum total of hardware and outlets that support the BS 1363 electrical standard. Key Takeaway • Network effects are among the most powerful strategic resources that can be created by technology-based innovation. Many category-dominating organizations and technologies, including Microsoft, Apple, NASDAQ, eBay, Facebook, and Visa, owe their success to network effects. Network effects are also behind the establishment of most standards, including Blu-ray, Wi-Fi, and Bluetooth. Questions and Exercises 1. What are network effects? What are the other names for this concept? 2. List several products or services subject to network effects. What factors do you believe helped each of these efforts achieve dominance? 3. Which firm do you suspect has stronger end-user network effects: Google’s online search tool or Microsoft’s Windows operating system? Why? 4. Network effects are often associated with technology, but tech isn’t a prerequisite for the existence of network effects. Name a product, service, or phenomenon that is not related to information technology that still dominates due to network effects. 6.02: Wheres All That Value Come From Learning Objectives After studying this section you should be able to do the following: 1. Identify the three primary sources of value for network effects. 2. 
Recognize factors that contribute to the staying power and complementary benefits of a product or service subject to network effects. 3. Understand how firms like Microsoft and Apple each benefit from strong network effects. The value derived from network effects comes from three sources: exchange, staying power, and complementary benefits. • Exchange Facebook for one person isn’t much fun, and the first guy in the world with a fax machine didn’t have much more than a paperweight. But as each new Facebook friend or fax user comes online, a network becomes more valuable because its users can potentially communicate with more people. These examples show the importance of exchange in creating value. Every product or service subject to network effects fosters some kind of exchange. For firms leveraging technology, this might include anything you can represent in the ones and zeros of digital storage, such as movies, music, money, video games, and computer programs. And just about any standard that allows things to plug into one another, interconnect, or otherwise communicate will live or die based on its ability to snare network effects. Exercise: Graph It Some people refer to network effects by the name Metcalfe’s Law. It got this name when, toward the start of the dot-com boom, Bob Metcalfe (the inventor of the Ethernet networking standard) wrote a column in InfoWorld magazine stating that the value of a network equals its number of users squared. What do you think of this formula? Graph the law with the vertical axis labeled “value” and the horizontal axis labeled “users.” Do you think the graph is an accurate representation of what’s happening in network effects? If so, why? If not, what do you think the graph really looks like?
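If you would rather try the “Graph It” exercise in code than on paper, the sketch below plots Metcalfe's n-squared curve next to a more conservative n log n curve that some analysts have suggested better reflects real networks. It assumes the matplotlib plotting library is installed; neither curve is presented here as the “right” answer.

```python
# One way to tackle the "Graph It" exercise: compare Metcalfe's n-squared
# value curve with a more conservative n log n alternative. Illustrative only.
import math
import matplotlib.pyplot as plt

users = range(1, 1001)
metcalfe = [n ** 2 for n in users]                 # value grows with the square of users
conservative = [n * math.log(n) for n in users]    # a frequently cited alternative

plt.plot(users, metcalfe, label="Metcalfe's Law: n^2")
plt.plot(users, conservative, label="Alternative: n log n")
plt.xlabel("users")
plt.ylabel("value")
plt.legend()
plt.title("How might network value grow with users?")
plt.show()
```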
textbooks/biz/Management/Book%3A_Information_Systems__A_Managers_Guide_to_Harnessing_Technology/06%3A_Understanding_Network_Effects/6.01%3A_Introduction.txt
Learning Objectives After studying this section you should be able to do the following: 1. Recognize and distinguish between one-sided and two-sided markets. 2. Understand same-side and cross-side exchange benefits. • Understanding Network Structure To understand the key sources of network value, it’s important to recognize the structure of the network. Some networks derive most of their value from a single class of users. An example of this kind of network is instant messaging (IM). While there might be some add-ons for the most popular IM tools, they don’t influence most users’ choice of an IM system. You pretty much choose one IM tool over another based on how many of your contacts you can reach. Economists would call IM a one-sided market (a market that derives most of its value from a single class of users), and the network effects derived from IM users attracting more IM users as being same-side exchange benefits (benefits derived by interaction among members of a single class of participant). But some markets are comprised of two distinct categories of network participant. Consider video games. People buy a video game console largely based on the number of really great games available for the system. Software developers write games based on their ability to reach the greatest number of paying customers, so they’re most likely to write for the most popular consoles first. Economists would call this kind of network a two-sided market (a network market comprised of two distinct categories of participant, both of which are needed for the network to deliver value). When an increase in the number of users on one side of the market (console owners, for example) creates a rise in the other side (software developers), that’s called a cross-side exchange benefit. The Positive Feedback Loop of Network Effects IM is considered a one-sided market, where the value-creating, positive-feedback loop of network effects comes mostly from same-side benefits from a single group (IM members who attract other IM members who want to communicate with them). Video game consoles, however, are considered a two-sided network, where significant benefits come from two distinct classes of users that add value from cross-side benefits by attracting their opposite group. In the game console market, more users of a console attract more developers who write more software for that console, and that attracts more users. Game availability is the main reason the Sony PlayStation 2 dominated the original Xbox. And app availability is one of the most significant advantages the iPhone offers over competitive hardware. It is possible that a network may have both same-side and cross-side benefits. Xbox 360 benefits from cross-side benefits in that more users of that console attract more developers writing more software titles and vice versa. However, the Xbox Live network that allows users to play against each other has same-side benefits. If your buddies use Xbox Live and you want to play against them, you’re more likely to buy an Xbox. Key Takeaways • In one-sided markets, users gain benefits from interacting with a similar category of users (think instant messaging, where everyone can send and receive messages to one another). • In two-sided markets, users gain benefits from interacting with a separate, complementary class of users (e.g., in the video game industry console owners are attracted to platforms with the most games, while innovative developers are attracted to platforms that have the most users).
Questions and Exercises 1. What is the difference between same-side exchange benefits and cross-side exchange benefits? 2. What is the difference between a one-sided market and a two-sided market? 3. Give examples of one-sided and two-sided markets. 4. Identify examples of two-sided markets where both sides pay for a product or service. Identify examples where only one side pays. What factors determine who should pay? Does paying have implications for the establishment and growth of a network effect? What might a firm do to encourage early network growth? 5. The Apple iPhone Developer Program provides developers access to the App Store where they can distribute their free or commercial applications to millions of iPhone and iPod touch customers. Would the iPhone market be considered a one or two-sided market?
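One way to internalize the cross-side feedback loop described in this section is to simulate it. The toy model below uses invented coefficients for a hypothetical console platform: roughly one new developer appears for every five hundred owners, and each developer's catalog attracts about twenty new buyers per quarter. The specific numbers don't matter; the reinforcing spiral they produce does.

```python
# A toy model of the cross-side feedback loop in a two-sided market (console
# owners and game developers). All coefficients are invented for illustration;
# the point is the reinforcing loop, not the specific numbers.
users, developers = 1_000, 10

for quarter in range(1, 9):
    new_developers = int(0.002 * users)   # a larger user base draws more developers
    new_users = int(20 * developers)      # every developer's titles attract new buyers each quarter
    developers += new_developers
    users += new_users
    print(f"Quarter {quarter}: {users:,} users, {developers:,} developers")
```

Double the starting number of developers and the user curve bends upward much faster, which hints at why platform owners work so hard to court the side of the market that gets the flywheel spinning.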
textbooks/biz/Management/Book%3A_Information_Systems__A_Managers_Guide_to_Harnessing_Technology/06%3A_Understanding_Network_Effects/6.03%3A_One-Sided_or_Two-Sided_Markets.txt
Learning Objectives After studying this section you should be able to do the following: 1. Understand how competition in markets where network effects are present differ from competition in traditional markets. 2. Understand the reasons why it is so difficult for late-moving, incompatible rivals to compete in markets where a dominant, proprietary standard is present. When network effects play a starring role, competition in an industry can be fundamentally different than in conventional, nonnetwork industries. First, network markets experience early, fierce competition. The positive-feedback loop inherent in network effects—where the biggest networks become even bigger—causes this. Firms are very aggressive in the early stages of these industries because once a leader becomes clear, bandwagons form, and new adopters begin to overwhelmingly favor the leading product over rivals, tipping the market in favor of one dominant firm or standard. This tipping can be remarkably swift. Once the majority of major studios and retailers began to back Blu-ray over HD DVD, the latter effort folded within weeks. These markets are also often winner-take-all or winner-take-most, exhibiting monopolistic tendencies where one firm dominates all rivals. Look at all of the examples listed so far—in nearly every case the dominant player has a market share well ahead of all competitors. When, during the U.S. Microsoft antitrust trial, Judge Thomas Penfield Jackson declared Microsoft to be a monopoly (a market where there are many buyers but only one dominant seller), the collective response should have been “of course.” Why? The natural state of a market where network effects are present (and this includes operating systems and Office software) is for there to be one major player. Since bigger networks offer more value, they can charge customers more. Firms with a commanding network effects advantage may also enjoy substantial bargaining power over partners. For example, Apple, which controls over 75 percent of digital music sales, for years was able to dictate song pricing, despite the tremendous protests of the record labels (Barnes, 2007). In fact, Apple’s stranglehold was so strong that it leveraged bargaining power even though the “Big Four” record labels (Universal, Sony, EMI, and Warner) were themselves an oligopoly (a market dominated by a small number of powerful sellers) that together provide over 85 percent of music sold in the United States. Finally, it’s important to note that the best product or service doesn’t always win. PlayStation 2 dominated the video console market over the original Xbox, despite the fact that nearly every review claimed the Xbox was hands-down a more technically superior machine. Why were users willing to choose an inferior product (PS2) over a superior one (Xbox)? The power of network effects! PS2 had more users, which attracted more developers offering more games. Figure 6.1 Battling a leader with network effects is tough1. This last note is a critical point to any newcomer wishing to attack an established rival. Winning customers away from a dominant player in a network industry isn’t as easy as offering a product or service that is better. Any product that is incompatible with the dominant network has to exceed the value of the technical features of the leading player, plus (since the newcomer likely starts without any users or third-party product complements) the value of the incumbent’s exchange, switching cost, and complementary product benefit (see Figure 6.1). 
And the incumbent must not be able to easily copy any of the newcomer’s valuable new innovations; otherwise the dominant firm will quickly match any valuable improvements made by rivals. As such, technological leapfrogging, or competing by offering a superior generation of technology, can be really tough (Schilling, 2003). Is This Good for Innovation? Critics of firms that leverage proprietary standards for market dominance often complain that network effects are bad for innovation. But this statement isn’t entirely true. While network effects limit competition against the dominant standard, innovation within a standard may actually blossom. Consider Windows. Microsoft has a huge advantage in the desktop operating system market, so few rivals try to compete with it. Apple’s Mac OS and the open source Linux operating system are the firm’s only credible rivals, and both have tiny market shares. But the dominance of Windows is a magnet for developers to innovate within the standard. Programmers with novel ideas are willing to make the investment in learning to write software for Windows because they’re sure that a Windows version can be used by the overwhelming majority of computer users. By contrast, look at the mess we initially had in the mobile phone market. With so many different handsets offering different screen sizes, running different software, having different key layouts, and working on different carrier networks, writing a game that’s accessible by the majority of users is nearly impossible. Glu Mobile, a maker of online games, launched fifty-six reengineered builds of Monopoly to satisfy the diverse requirements of just one telecom carrier (Hutheesing, 2006). As a result, entrepreneurs with great software ideas for the mobile market were deterred because writing, marketing, and maintaining multiple product versions is both costly and risky. It wasn’t until Apple’s iPhone arrived, offering developers both a huge market and a consistent set of development standards, that third-party software development for mobile phones really took off. Key Takeaways • Unseating a firm that dominates with network effects can be extremely difficult, especially if the newcomer is not compatible with the established leader. Newcomers will find their technology will need to be so good that it must leapfrog not only the value of the established firm’s tech, but also the perceived stability of the dominant firm, the exchange benefits provided by the existing user base, and the benefits from any product complements. For evidence, just look at how difficult it’s been for rivals to unseat the dominance of Windows. • Because of this, network effects might limit the number of rivals that challenge a dominant firm. But the establishment of a dominant standard may actually encourage innovation within the standard, since firms producing complements for the leader have faith the leader will have staying power in the market. Questions and Exercises 1. How is competition in markets where network effects are present different from competition in traditional markets? 2. What are the reasons it is so difficult for late-moving, incompatible rivals to compete in markets where a dominant, proprietary standard is present? What is technological leapfrogging and why is it so difficult to accomplish? 3. Does it make sense to try to prevent monopolies in markets where network effects exist? 4. Are network effects good or bad for innovation? Explain. 5. 
What is the relationship between network effects and the bargaining power of participants in a network effects “ecosystem”? 6. Cite examples where the best technology did not dominate a network effects-driven market. 1Adapted from J. Gallaugher and Y. Wang, “Linux vs. Windows in the Middle Kingdom: A Strategic Valuation Model for Platform Competition” (paper, Proceedings of the 2008 Meeting of Americas Conference on Information Systems, Toronto, CA, August 2008), extending M. Schilling, “Technological Leapfrogging: Lessons from the U.S. Video Game Console Industry,” California Management Review, Spring 2003.
textbooks/biz/Management/Book%3A_Information_Systems__A_Managers_Guide_to_Harnessing_Technology/06%3A_Understanding_Network_Effects/6.04%3A_How_Are_These_Markets_Different.txt
Learning Objectives After studying this section you should be able to do the following: 1. Plot strategies for competing in markets where network effects are present, both from the perspective of the incumbent firm and the new market entrant. 2. Give examples of how firms have leveraged these strategies to compete effectively. Why do you care whether networks are one-sided, two-sided, or some sort of hybrid? Well, when crafting your plan for market dominance, it’s critical to know if network effects exist, how strong they might be, where they come from, and how they might be harnessed to your benefit. Here’s a quick rundown of the tools at your disposal when competing in the presence of network effects. Strategies for Competing in Markets with Network Effects (Examples in Parentheses) • Move early (Yahoo! Auctions in Japan) • Subsidize product adoption (PayPal) • Leverage viral promotion (Skype; Facebook feeds) • Expand by redefining the market to bring in new categories of users (Nintendo Wii) or through convergence (iPhone). • Form alliances and partnerships (NYCE vs. Citibank) • Establish distribution channels (Java with Netscape; Microsoft bundling Media Player with Windows) • Seed the market with complements (Blu-ray; Nintendo) • Encourage the development of complementary goods—this can include offering resources, subsidies, reduced fees, market research, development kits, venture capital (Facebook fbFund). • Maintain backward compatibility (Apple’s Mac OS X Rosetta translation software for PowerPC to Intel) • For rivals, be compatible with larger networks (Apple’s move to Intel; Live Search Maps) • For incumbents, constantly innovate to create a moving target and block rival efforts to access your network (Apple’s efforts to block access to its own systems) • For large firms with well-known followers, make preannouncements (Microsoft) • Move Early In the world of network effects, this is a biggie. Being first allows your firm to start the network effects snowball rolling in your direction. In Japan, worldwide auction leader eBay showed up just five months after Yahoo! launched its Japanese auction service. But eBay was never able to mount a credible threat and ended up pulling out of the market. Being just five months late cost eBay billions in lost sales, and the firm eventually retreated, acknowledging it could never unseat Yahoo!’s network effects lead. Another key lesson from the loss of eBay Japan? Exchange depends on the ability to communicate! EBay’s huge network effects in the United States and elsewhere didn’t translate to Japan because most Japanese aren’t comfortable with English, and most English speakers don’t know Japanese. The language barrier made Japan a “greenfield” market with no dominant player, and Yahoo!’s early move provided the catalyst for victory. Timing is often critical in the video game console wars, too. Sony’s PlayStation 2 enjoyed an eighteen-month lead over the technically superior Xbox (as well as Nintendo’s GameCube). That time lead helped to create what for years was the single most profitable division at Sony. By contrast, the technically superior PS3 showed up months after Xbox 360 and at roughly the same time as the Nintendo Wii, and has struggled in its early years, racking up multibillion-dollar losses for Sony (Null, 2008). What If Microsoft Threw a Party and No One Showed Up? 
Microsoft launched the Zune media player with features that should be subject to network effects—the ability to share photos and music by wirelessly “squirting” content to other Zune users. The firm even promoted Zune with the tagline “Welcome to the Social.” Problem was the Zune Social was a party no one wanted to attend. The late-arriving Zune garnered a market share of just 3 percent, and users remained hard pressed to find buddies to leverage these neat social features (Walker, 2008). A cool idea does not make a network effect happen.
textbooks/biz/Management/Book%3A_Information_Systems__A_Managers_Guide_to_Harnessing_Technology/06%3A_Understanding_Network_Effects/6.05%3A_Competing_When_Network_Effects_Matter.txt
Learning Objectives After studying this section you should be able to do the following: 1. Recognize the unexpected rise and impact of social media and peer production systems, and understand how these services differ from prior generation tools. 2. List the major classifications of social media services. Over the past few years a fundamentally different class of Internet services has attracted users, made headlines, and increasingly garnered breathtaking market valuations. Often referred to under the umbrella term “Web 2.0,” these new services are targeted at harnessing the power of the Internet to empower users to collaborate, create resources, and share information in a distinctly different way from the static Web sites and transaction-focused storefronts that characterized so many failures in the dot-com bubble. Blogs, wikis, social networks, photo and video sharing sites, and tagging systems all fall under the Web 2.0 moniker, as do a host of supporting technologies and related efforts. The term Web 2.0 is a tricky one because like so many popular technology terms there’s not a precise definition. The term was coined by publisher and pundit Tim O’Reilly in 2003, and techies often joust over the breadth of the Web 2.0 umbrella and over whether Web 2.0 is something new or simply an extension of technologies that have existed since the creation of the Internet. These arguments aren’t really all that important. What is significant is how quickly the Web 2.0 revolution came about, how unexpected it was, and how deeply impactful these efforts have become. Some of the sites and services that have evolved and their Web 1.0 counterparts are listed in Table 7.1 “Web 1.0 versus Web 2.0”.
Table 7.1 Web 1.0 versus Web 2.0
Web 1.0 → Web 2.0
DoubleClick → Google AdSense
Ofoto → Flickr
Akamai → BitTorrent
mp3.com → Napster
Britannica Online → Wikipedia
personal Web sites → blogging
evite → upcoming.org and Eventful
domain name speculation → search engine optimization
page views → cost per click
screen scraping → Web services
publishing → participation
content management systems → wikis
directories (taxonomy) → tagging (“folksonomy”)
stickiness → syndication
instant messaging → Twitter
Monster.com → LinkedIn
To underscore the speed with which Web 2.0 arrived on the scene, and the impact of leading Web 2.0 services, consider the following efforts: • According to a spring 2008 report by Morgan Stanley, Web 2.0 services ranked as seven of the world’s top ten most heavily trafficked Internet sites (YouTube, Live.com, MySpace, Facebook, Hi5, Wikipedia, and Orkut); only one of these sites (MySpace) was on the list in 2005 (Stanley, 2008). • With only seven full-time employees and an operating budget of less than \$1 million, Wikipedia has become the fifth most visited site on the Internet (Kane & Fichman, 2009). The site boasts well over fifteen million articles in over two hundred sixty different languages, all of them contributed, edited, and fact-checked by volunteers. • Just two years after it was founded, MySpace was bought for \$580 million by Rupert Murdoch’s News Corporation (the media giant that owns the Wall Street Journal and the Fox networks, among other properties). By the end of 2007, the site accounted for some 12 percent of Internet minutes and had repeatedly ranked as the most-visited Web site in the United States (Chmielewski & Guynn, 2008).
But rapid rise doesn’t always mean a sustained following, and by the start of 2010, some were beginning to write the service’s obituary as it failed to keep pace with Facebook (Malik, 2010).
• The population of rival Facebook is now so large that it could be considered the third largest “nation” in the world. Half the site’s users log in at least once a day, spending an average of fifty-five minutes a day on the site2. A fall 2007 investment from Microsoft pegged the firm’s overall value at \$15 billion, a number that would have made it the fifth most valuable Internet firm, despite annual revenues at the time of only \$150 million (Arrington, 2007). Those revenues have been growing, with the privately held firm expected to bring in from \$1.2 to \$2 billion in 2010 (Vascellaro, 2010).
• Just twenty months after its founding, YouTube was purchased by Google for \$1.65 billion. While Google struggles to figure out how to make profitable what is currently a money-losing resource hog (over twenty hours of video are uploaded to YouTube each minute) (Nakashima, 2008), the site has emerged as the Web’s leading destination for video, hosting everything from apologies from JetBlue’s CEO for service gaffes to questions submitted as part of the 2008 U.S. presidential debates. Fifty percent of YouTube’s roughly three hundred million users visit the site at least once a week (Stanley, 2008).
• Twitter has emerged as a major force that can break news and shape public opinion. China and Iran are among the governments so threatened by the power of Twitter-fueled data sharing that each has, at times, blocked Twitter access within their borders. At the first Twitter-focused Chirp conference in April 2010, Twitter boasted a population of over one hundred million users who have collectively posted more than ten billion tweets (Twitter messages). By this time, the service had also spawned an ecosystem of over one hundred thousand registered Twitter-supporting apps. In another nod to the service’s significance, the U.S. Library of Congress announced plans to archive every tweet ever sent (Bolton, 2010; Shaer, 2010).
• Services such as Twitter, Yelp, and the highly profitable TripAdvisor have unleashed the voice of the customer so that it is now often captured and broadcast immediately at the point of service. Reviews are now incorporated into search results and maps, making them the first thing many customers see when encountering a brand online. TripAdvisor, with just five hundred employees, contributes over \$150 million in profits to parent company Expedia (at roughly 50 percent margins) (Wash, 2009; Burrows, 2010).
Table 7.2 Major Social Media Tools
Blogs
Description: Short for “Web log”—an online diary that keeps a running chronology of entries. Readers can comment on posts. Can connect to other blogs through blog rolls or trackbacks. Key uses: share ideas, obtain feedback, mobilize a community.
Features: • Ease of use • Reverse chronology • Comment threads • Persistence • Searchability • Tags • Trackbacks
Technology providers: • Blogger (Google) • WordPress • Six Apart (TypePad and Movable Type) • Tumblr
Use case examples: • News outlets • Google • Graco • GM • Kaiser Permanente • Marriott • Microsoft
Wikis
Description: A Web site that anyone can edit directly from within the browser. Key uses: collaborate on common tasks or create a common knowledge base.
Features: • All changes are attributed • A complete revision history is maintained, with the ability to roll back changes and revert to earlier versions • Automatic notification of updates • Searchability • Tags • Monitoring
Technology providers: • Socialtext • PBWorks • Google Sites • WetPaint • Microsoft SharePoint • Apple OS X Server
Use case examples: • Dresdner Kleinwort Wasserstein • eBay • The FBI, CIA, and other intelligence agencies • Intuit • Pixar
Electronic social networks
Description: Online communities that allow users to establish a personal profile, link to other profiles (i.e., friends), share content, and communicate with members via messaging and posts. Key uses: discover and reinforce affiliations; identify experts; message individuals or groups; virally share media.
Features: • Detailed personal profiles using multimedia • Affiliations with groups • Affiliations with individuals • Messaging and public discussions • Media sharing • “Feeds” of recent activity among members
Technology providers: • Facebook • LinkedIn • MySpace • Ning • SelectMinds • LiveWorld • IBM/Lotus Connections • Salesforce.com • Socialtext
Use case examples: • Barack Obama (campaign and government organizing) • Currensee (foreign exchange trading) • Dell • Deloitte Consulting • Goldman Sachs • IBM • Reuters • Starbucks
Microblogging
Description: Short, asynchronous messaging systems. Users send messages to “followers.” Key uses: distribute time-sensitive information, share opinions, virally spread ideas, run contests and promotions, solicit feedback, provide customer support, track commentary on firms/products/issues, organize protests.
Features: • 140-character messages sent and received from mobile devices • Ability to respond publicly or privately • Can specify tags to classify discussion topics for easy searching and building comment threads • Follower lists
Technology providers: • Twitter • Socialtext Signals • Yammer • Salesforce.com (Chatter)
Use case examples: • Dell • Starbucks • Intuit • Small businesses • Celebrities • Zappos
Millions of users, billions of dollars, huge social impact, and these efforts weren’t even on the radar of most business professionals when today’s graduating college seniors first enrolled as freshmen. The trend demonstrates that even some of the world’s preeminent thought leaders and business publications can be sideswiped by the speed of the Internet. Consider that when management guru Michael Porter wrote a piece titled “Strategy and the Internet” at the end of the dot-com bubble, he lamented the high cost of building brand online, questioned the power of network effects, and cast a skeptical eye on ad-supported revenue models. Well, it turns out Web 2.0 efforts challenged all of these concerns. Among the efforts above, all built brand on the cheap with little conventional advertising, and each owes its hypergrowth and high valuation to its ability to harness the network effect.
While the Web 2.0 moniker is a murky one, we’ll add some precision to our discussion of these efforts by focusing on peer production, perhaps Web 2.0’s most powerful feature, where users work, often collaboratively, to create content and provide services online. Web-based efforts that foster peer production are often referred to as social media or user-generated content sites. These sites include blogs; wikis; social networks like Facebook and MySpace; communal bookmarking and tagging sites like Del.icio.us; media sharing sites like YouTube and Flickr; and a host of supporting technologies. And it’s not just about media. Peer-produced services like Skype and BitTorrent leverage users’ computers instead of a central IT resource to forward phone calls and video.
This ability saves their sponsors the substantial cost of servers, storage, and bandwidth. Peer production is also leveraged to create much of the open source software that supports many of the Web 2.0 efforts described above. Techniques such as crowdsourcing, where initially undefined groups of users band together to solve problems, create code, and develop services, are also a type of peer production. These efforts often seek to leverage the so-called wisdom of crowds, the idea that a large, diverse group often has more collective insight than a single or small group of trained professionals. These efforts will be expanded on below, along with several examples of their use and impact. Key Takeaways • A new generation of Internet applications is enabling consumers to participate in creating content and services online. Examples include Web 2.0 efforts such as social networks, blogs, and wikis, as well as efforts such as Skype and BitTorrent, which leverage the collective hardware of their user communities to provide a service. • These efforts have grown rapidly, most with remarkably little investment in promotion. Nearly all of these new efforts leverage network effects to add value and establish their dominance and viral marketing to build awareness and attract users. • Experts often argue whether Web 2.0 is something new or merely an extension of existing technologies. The bottom line is the magnitude of the impact of the current generation of services. • Peer production and social media fall under the Web 2.0 umbrella. These services often leverage the wisdom of crowds to provide insight or production that can be far more accurate or valuable than that provided by a smaller group of professionals. • Network effects play a leading role in enabling Web 2.0 firms. Many of these services also rely on ad-supported revenue models. Questions and Exercises 1. What distinguishes Web 2.0 technologies and services from the prior generation of Internet sites? 2. Several examples of rapidly rising Web 2.0 efforts are listed in this section. Can you think of other dramatic examples? Are there cautionary tales of efforts that may not have lived up to their initial hype or promise? Why do you suppose they failed? 3. Make your own list of Web 1.0 and Web 2.0 services and technologies. Would you invest in them? Why or why not? 4. In what ways do Web 2.0 efforts challenge the assumptions that Michael Porter made regarding Strategy and the Internet? 1Adapted from T. O’Reilly, “What Is Web 2.0?” O’Reilly, September 30, 2005. 2“Facebook Facts and Figures (History and Statistics),” Website Monitoring Blog, March 17, 2010.
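The “wisdom of crowds” idea discussed in this section has a simple statistical core: when many independent estimates are averaged, their individual errors tend to cancel out. The short Python simulation below is an illustration added to this text, not part of the original chapter; the true value, crowd size, and error size are arbitrary assumptions chosen only to make the effect visible.

# A minimal simulation of the "wisdom of crowds": many noisy, independent
# guesses, once averaged, usually land closer to the truth than a typical
# individual guess. All numbers here are illustrative assumptions, and the
# effect depends on the errors being independent and diverse.
import random

random.seed(42)
TRUE_VALUE = 100.0          # the quantity the crowd is estimating
CROWD_SIZE = 1_000          # number of independent guessers
INDIVIDUAL_ERROR = 25.0     # standard deviation of a single guess

guesses = [random.gauss(TRUE_VALUE, INDIVIDUAL_ERROR) for _ in range(CROWD_SIZE)]

crowd_estimate = sum(guesses) / len(guesses)
avg_individual_error = sum(abs(g - TRUE_VALUE) for g in guesses) / len(guesses)

print(f"Average individual error: {avg_individual_error:.1f}")
print(f"Crowd average error:      {abs(crowd_estimate - TRUE_VALUE):.1f}")
# Typical output: individual guesses are off by roughly 20, while the crowd's
# average is off by well under 1 -- the aggregation effect that peer-production
# systems such as Wikipedia rely on.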
textbooks/biz/Management/Book%3A_Information_Systems__A_Managers_Guide_to_Harnessing_Technology/07%3A_Peer_Production_Social_Media_and_Web_2.0/7.01%3A_Introduction.txt
Learning Objectives After studying this section you should be able to do the following: 1. Know what blogs are and how corporations, executives, individuals, and the media use them. 2. Understand the benefits and risks of blogging. 3. Appreciate the growth in the number of blogs, their influence, and their capacity to generate revenue. Blogs (short for Web logs) first emerged almost a decade ago as a medium for posting online diaries. (In a perhaps apocryphal story, Wired magazine claimed the term “Web log” was coined by Jorn Barger, a sometimes homeless, yet profoundly prolific, Internet poster). From humble beginnings, the blogging phenomenon has grown to a point where the number of public blogs tracked by Technorati (the popular blog index) has surpassed one hundred million (Takahashi, 2008). This number is clearly a long tail phenomenon, loaded with niche content that remains “discoverable” through search engines and blog indexes. Trackbacks (third-party links back to original blog post), and blog rolls (a list of a blogger’s favorite sites—a sort of shout-out to blogging peers) also help distinguish and reinforce the reputation of widely read blogs. The most popular blogs offer cutting-edge news and commentary, with postings running the gamut from professional publications to personal diaries. While this cacophony of content was once dismissed, blogging is now a respected and influential medium. Some might say that many of the most popular blogs have grown beyond the term, transforming into robust media enterprises. Consider that the political blog The Huffington Post is now more popular than all but eight newspaper sites and has a valuation higher than many publicly traded papers (Alterman, 2008; Learmonth, 2008). Keep in mind that this is a site without the sports, local news, weather, and other content offered by most papers. Ratings like this are hard to achieve—most bloggers can’t make a living off their musings. But among the elite ranks, killer subscriber numbers are a magnet for advertisers. Top blogs operating on shoestring budgets can snare several hundred thousand dollars a month in ad revenue (Zuckerman, 2007). Most start with ad networks like Google AdSense, but the most elite engage advertisers directly for high-value deals and extended sponsorships. Top blogs have begun to attract well-known journalists away from print media. The Huffington Post hired a former Washington Post editor Lawrence Roberts to head the site’s investigative unit. The popular blog TechCrunch now features posts by Sarah Lacy (a BusinessWeek cover-story writer) and has hired Erick Schonfeld away from Time Warner’s business publishing empire. Schonfeld’s colleague, Om Malik, has gone on to found another highly ranked tech industry blog, GigaOM. Senior executives from many industries have also begun to weigh in with online ruminations, going directly to the people without a journalist filtering their comments. Hotel chief Bill Marriott, Paul Levy (CEO of health care quality leader Beth Israel Deaconess Medical Center), Toyota’s Akio Toyoda, and Zappos’ CEO Tony Hsieh use their blogs for purposes that include a combination of marketing, sharing ideas, gathering feedback, press response, image shaping, and reaching consumers directly without press filtering. Blogs have the luxury of being more topically focused than traditional media, with no limits on page size, word count, or publication deadline. 
Some of the best examples engage new developments in topic domains much more quickly and deeply than traditional media. For example, it’s not uncommon for blogs focused on the law or politics to provide a detailed dissection of a Supreme Court opinion within hours of its release—offering analysis well ahead of, and with greater depth, than via what bloggers call the mainstream media (MSM). As such, it’s not surprising that most mainstream news outlets have begun supplementing their content with blogs that can offer greater depth, more detail, and deadline-free timeliness. Blogs While the feature set of a particular blog depends on the underlying platform and the preferences of the blogger, several key features are common to most blogs: • Ease of use. Creating a new post usually involves clicking a single button. • Reverse chronology. Posts are listed in reverse order of creation, making it easy to see the most recent content. • Comment threads. Readers can offer comments on posts. • Persistence. Posts are maintained indefinitely at locations accessible by permanent links. • Searchability. Current and archived posts are easily searchable. • Tags. Posts are often classified under an organized tagging scheme. • Trackbacks. Allows an author to acknowledge the source of an item in their post, which allows bloggers to follow the popularity of their posts among other bloggers. The voice of the blogosphere can wield significant influence. Examples include leading the charge for Dan Rather’s resignation and prompting the design of a new insulin pump. In an example of what can happen when a firm ignores social media, consider the flare-up Ingersoll Rand faced when the online community exposed a design flaw in its Kryptonite bike lock. Online posts showed the thick metal lock could be broken with a simple ball-point pen. A video showing the hack was posted online. When Ingersoll Rand failed to react quickly, the blogosphere erupted with criticism. Just days after online reports appeared, the mainstream media picked up the story. The New York Times ran a piece titled “The Pen Is Mightier Than the Lock” that included a series of photos demonstrating the ballpoint Kryptonite lock pick. The event tarnished the once-strong brand and eventually resulted in a loss of over \$10 million. Like any Web page, blogs can be public, tucked behind a corporate firewall, or password protected. Most blogs offer a two-way dialogue, allowing users to comment on posts (sort of instant “letters to the editor,” posted online and delivered directly to the author). The running dialogue can read like an electronic bulletin board, and can be an effective way to gather opinion when vetting ideas. Comments help keep a blogger honest. Just as the “wisdom of crowds” keeps Wikipedia accurate, a vigorous community of commenters will quickly expose a blogger’s errors of fact or logic. Despite this increased popularity, blogging has its downside. Blog comments can be a hothouse for spam and the disgruntled. Ham-handed corporate efforts (such as poor response to public criticism or bogus “praise posts”) have been ridiculed. Employee blogging can be difficult to control and public postings can “live” forever in the bowels of an Internet search engine or as content pasted on other Web sites. Many firms have employee blogging and broader Internet posting policies to guide online conduct that may be linked to the firm (see Section 7.9 “Get SMART: The Social Media Awareness and Response Team”). 
Bloggers, beware—there are dozens of examples of workers who have been fired for what employers viewed as inappropriate posts. Blogs can be hosted via third-party services (Google Blogger, WordPress, Tumblr, TypePad, Windows Live Spaces), with most offering a combination of free and premium features. Blogging features have also been incorporated into social networks such as Facebook, MySpace, and Ning, as well as corporate social media platforms such as Socialtext. Blogging software can also be run on third-party servers, allowing the developer more control in areas such as security and formatting. The most popular platform for users choosing to host their own blog server is the open source WordPress system. In the end, the value of any particular blog derives from a combination of technical and social features. The technical features make it easy for a blogger and his or her community to engage in an ongoing conversation on some topic of shared interest. But the social norms and patterns of use that emerge over time in each blog are what determine whether technology features will be harnessed for good or ill. Some blogs develop norms of fairness, accuracy, proper attribution, quality writing, and good faith argumentation, and attract readers that find these norms attractive. Others mix it up with hotly contested debate, one-sided partisanship, or deliberately provocative posts, attracting a decidedly different type of discourse. Key Takeaways • Blogs provide a rapid way to distribute ideas and information from one writer to many readers. • Ranking engines, trackbacks, and comments allow a blogger’s community of readers to spread the word on interesting posts and participate in the conversation, and help distinguish and reinforce the reputations of widely read blogs. • Well-known blogs can be powerfully influential, acting as flashpoints on public opinion. • Firms ignore influential bloggers at their peril, but organizations should also be cautious about how they use and engage blogs, and avoid flagrantly promotional or biased efforts. • Top blogs have gained popularity, valuations, and profits that far exceed those of many leading traditional newspapers, and leading blogs have begun to attract well-known journalists away from print media. • Senior executives from several industries use blogs for business purposes, including marketing, sharing ideas, gathering feedback, press response, image shaping, and reaching consumers directly without press filtering. Questions and Exercises 1. Visit Technorati and find out which blogs are currently the most popular. Why do you suppose the leaders are so popular? 2. How are popular blogs discovered? How is their popularity reinforced? 3. Are blog comment fields useful? If so, to whom or how? What is the risk associated with allowing users to comment on blog posts? How should a blogger deal with comments that they don’t agree with? 4. Why would a corporation, an executive, a news outlet, or a college student want to blog? What are the benefits? What are the concerns? 5. Identify firms and executives that are blogging online. Bring examples to class and be prepared to offer your critique of their efforts. 6. How do bloggers make money? Do all bloggers have to make money? Do you think the profit motive influences their content? 7. Investigate current U.S. Federal Trade Commission laws (or the laws in your home country) that govern bloggers and other social media use. How do these restrictions impact how firms interact with bloggers? 
What are the penalties and implications if such rules aren’t followed? Are there unwritten rules of good practice that firms and bloggers should consider as well? What might those be? 8. According to your reading, how does the blog The Huffington Post compare with the popularity of newspaper Web sites? 9. What advantage do blogs have over the MSM? What advantage does the MSM have over the most popular blogs? 10. Start a blog using Blogger.com, WordPress.com, or some other blogging service. Post a comment to another blog. Look for the trackback field when making a post, and be sure to enter the trackback for any content you cite in your blog.
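For readers trying question 10 above, it can help to see how the blog features listed earlier in this section (reverse chronology, comments, tags, and trackbacks) map onto a very simple data model. The sketch below is an illustrative toy added to this text; it is not the schema of WordPress, Blogger, or any real platform, and all class and field names are assumptions.

# Illustrative toy model of the core blog features described in this section:
# posts listed in reverse chronology, reader comments, tags, and trackbacks.
# Names and structure are assumptions for teaching purposes only.
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class Comment:
    author: str
    body: str

@dataclass
class Post:
    title: str
    body: str
    published: datetime
    tags: list[str] = field(default_factory=list)           # classification scheme
    comments: list[Comment] = field(default_factory=list)   # reader dialogue
    trackbacks: list[str] = field(default_factory=list)     # URLs of posts citing this one

class Blog:
    def __init__(self, title: str):
        self.title = title
        self.posts: list[Post] = []

    def publish(self, post: Post) -> None:
        self.posts.append(post)

    def front_page(self) -> list[Post]:
        # Reverse chronology: newest post first, the defining blog convention.
        return sorted(self.posts, key=lambda p: p.published, reverse=True)

blog = Blog("Example Tech Blog")
blog.publish(Post("Older post", "First thoughts", datetime(2009, 5, 1), tags=["web2.0"]))
blog.publish(Post("Newer post", "A follow-up", datetime(2010, 1, 15), tags=["twitter"]))
print([p.title for p in blog.front_page()])   # ['Newer post', 'Older post']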
textbooks/biz/Management/Book%3A_Information_Systems__A_Managers_Guide_to_Harnessing_Technology/07%3A_Peer_Production_Social_Media_and_Web_2.0/7.02%3A_Blogs.txt
Learning Objectives After studying this section you should be able to do the following: 1. Know what wikis are and how they are used by corporations and the public at large. 2. Understand the technical and social features that drive effective and useful wikis. 3. Suggest opportunities where wikis would be useful and consider under what circumstances their use may present risks. 4. Recognize how social media such as wikis and blogs can influence a firm’s customers and brand. A wiki is a Web site anyone can edit directly within a Web browser (provided the site grants the user edit access). Wikis derive their name from the Hawaiian word for “quick.” Ward Cunningham, the “wiki father” christened this new class of software with the moniker in honor of the wiki-wiki shuttle bus at the Honolulu airport. Wikis can indeed be one of the speediest ways to collaboratively create content online. Many popular online wikis serve as a shared knowledge repository in some domain. The largest and most popular wiki is Wikipedia, but there are hundreds of publicly accessible wikis that anyone can participate in. Each attempts to chronicle a world of knowledge within a particular domain, with examples ranging from Wine Wiki for oenophiles to Wookieepedia, the Star Wars wiki. But wikis can be used for any collaborative effort—from meeting planning to project management. And in addition to the hundreds of public wikis, there are many thousand more that are hidden away behind firewalls, used as proprietary internal tools for organizational collaboration. Like blogs, the value of a wiki derives from both technical and social features. The technology makes it easy to create, edit, and refine content; learn when content has been changed, how and by whom; and to change content back to a prior state. But it is the social motivations of individuals (to make a contribution, to share knowledge) that allow these features to be harnessed. The larger and more active a wiki community, the more likely it is that content will be up-to-date and that errors will be quickly corrected (again, we see the influence of network effects, where products and services with larger user bases become more valuable). Several studies have shown that large community wiki entries are as or more accurate than professional publication counterparts (Lichter, 2009; Kane, et. al., 2009). Want to add to or edit a wiki entry? On most sites you just click the “Edit” link. Wikis support what you see is what you get (WYSIWYG) editing that, while not as robust as traditional word processors, is still easy enough for most users to grasp without training or knowledge of arcane code or markup language. Users can make changes to existing content and can easily create new pages or articles and link them to other pages in the wiki. Wikis also provide a version history. Click the “History” link on Wikipedia, for example, and you can see when edits were made and by whom. This feature allows the community to roll back a wiki to a prior page, in the event that someone accidentally deletes key info, or intentionally defaces a page. Vandalism is a problem on Wikipedia, but it’s more of a nuisance than a crisis. A Wired article chronicled how Wikipedia’s entry for former U.S. President Jimmy Carter was regularly replaced by a photo of a “scruffy, random unshaven man with his left index finger shoved firmly up his nose” (Pink, 2005). 
Nasty and inappropriate, to be sure, but the Wikipedia editorial community is now so large and so vigilant that most vandalism is caught and corrected within seconds. Watch-lists for the most active targets (say, the Web pages of political figures or controversial topics) tip off the community when changes are made. The accounts of vandals can be suspended, and while mischief-makers can log in under another name, most vandals simply become discouraged and move on. It’s as if an army of do-gooders follows a graffiti tagger and immediately repaints any defacement.
Wikis
As with blogs, a wiki’s feature set varies depending on the specific wiki tool chosen, as well as on administrator design, but most wikis support the following key features:
• All changes are attributed, so others can see who made a given edit.
• A complete revision history is maintained so changes can be compared against prior versions and rolled back as needed.
• There is automatic notification and monitoring of updates; users subscribe to wiki content and can receive updates via e-mail or RSS feed when pages have been changed or new content has been added.
• All the pages in a wiki are searchable.
• Specific wiki pages can be classified under an organized tagging scheme.
Wikis are available both as software (commercial as well as open source varieties) that firms can install on their own computers and as online services (subscription or ad-supported) where content is hosted off-site by third parties. Since wikis can be started without the oversight or involvement of a firm’s IT department, their appearance in organizations often comes from grassroots user initiative. Many wiki services offer additional tools such as blogs, message boards, or spreadsheets as part of their feature set, making most wikis really more full-featured platforms for social computing.
Jump-starting a wiki can be a challenge, and an underused wiki can be a ghost town of orphan, out-of-date, and inaccurate content. Fortunately, once users see the value of wikis, use and effectiveness often snowball. The unstructured nature of wikis is both a strength and a weakness. Some organizations employ wikimasters to “garden” community content: “prune” excessive posts, “transplant” commentary to the best location, and “weed” as necessary. Wikipatterns.com offers a guide to the stages of wiki adoption and a collection of community-building and content-building strategies.
Examples of Wiki Use
Wikis can be vital tools for collecting and leveraging knowledge that would otherwise be scattered throughout an organization; reducing geographic distance; removing boundaries between functional areas; and flattening preexisting hierarchies. Companies have used wikis in a number of ways:
• At Pixar, all product meetings have an associated wiki to improve productivity. The online agenda ensures that all attendees can arrive knowing the topics and issues to be covered. Anyone attending the meeting (and even those who can’t make it) can update the agenda, post supporting materials, and make comments to streamline and focus in-person efforts.
• At European investment bank Dresdner Kleinwort Wasserstein, employees use wikis for everything from setting meeting agendas to building multimedia training for new hires. Six months after launch, wiki use had surpassed activity on the firm’s established intranet. Wikis are also credited with helping to reduce Dresdner e-mail traffic by 75 percent (Carlin, 2007).
• Sony’s PlayStation team uses wikis to regularly maintain one-page overviews on the status of various projects. In this way, legal, marketing, and finance staff can get quick, up-to-date status reports on relevant projects, including the latest projected deadlines, action items, and benchmark progress. Strong security measures are enforced that limit access to only those who must be in the know, since the overviews often discuss products that have not been released. • Employees at investment-advisory firm Manning and Napier use a wiki to collaboratively track news in areas of critical interest. Providing central repositories for employees to share articles and update evolving summaries on topics such as health care legislation, enables the firm to collect and focus what would otherwise be fragmented findings and insight. Now all employees can refer to central pages that each serve as a lightning rod attracting the latest and most relevant findings. • Intellipedia is a secure wiki built on Intelink, a U.S. government system connecting sixteen spy agencies, military organizations, and the Department of State. The wiki is a “magnum opus of espionage,” handling some one hundred thousand user accounts and five thousand page edits a day. Access is classified in tiers as “unclassified,” “secret,” and “top secret” (the latter hosting 439,387 pages and 57,248 user accounts). A page on the Mumbai terror attacks was up within minutes of the event, while a set of field instructions relating to the use of chlorine-based terror bombs in Iraq was posted and refined within two days of material identification—with the document edited by twenty-three users at eighteen locations (Calabrese, 2009). When brought outside the firewall, corporate wikis can also be a sort of value-generation greenhouse, allowing organizations to leverage input from their customers and partners: • Intuit has created a “community wiki” that encourages the sharing of experience and knowledge not just regarding Intuit products, such as QuickBooks, but also across broader topics its customers may be interested in, such as industry-specific issues (e.g., architecture, nonprofit) or small business tips (e.g., hiring and training employees). The TurboTax maker has also sponsored TaxAlmanac.org, a wiki-based tax resource and research community. • Microsoft leveraged its customer base to supplement documentation for its Visual Studio software development tool. The firm was able to enter the Brazilian market with Visual Studio in part because users had created product documentation in Portuguese (King, 2007). • ABC and CBS have created public wikis for the television programs Lost, The Amazing Race, and CSI, among others, offering an outlet for fans, and a way for new viewers to catch up on character backgrounds and complex plot lines. • Executive Travel, owned by American Express Publishing, has created a travel wiki for its more than one hundred and thirty thousand readers with the goal of creating what it refers to as “a digital mosaic that in theory is more authoritative, comprehensive, and useful” than comments on a Web site, and far more up-to-date than any paper-based travel guide (King, 2007). Of course, one challenge in running such a corporate effort is that there may be a competing public effort already in place. Wikitravel.org currently holds the top spot among travel-based wikis, and network effects suggest it will likely grow and remain more current than rival efforts. 
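The version history and rollback behavior described earlier in this section is easy to picture as a data structure: a page is simply a list of attributed revisions, and “rolling back” means republishing an earlier version as the newest one. The Python sketch below is a simplified illustration added to this text, not how MediaWiki or any commercial wiki engine is actually implemented; all names are assumptions.

# Simplified illustration of wiki version history: every edit is attributed
# and kept, and a vandalized page can be repaired by republishing an earlier
# revision. This is a teaching sketch, not a real wiki engine.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class Revision:
    author: str
    content: str
    timestamp: datetime

class WikiPage:
    def __init__(self, title: str):
        self.title = title
        self.revisions: list[Revision] = []

    def edit(self, author: str, content: str) -> None:
        """Record a new, attributed revision (nothing is ever overwritten)."""
        self.revisions.append(Revision(author, content, datetime.now(timezone.utc)))

    def current(self) -> str:
        return self.revisions[-1].content if self.revisions else ""

    def history(self) -> list[tuple[str, datetime]]:
        """Who changed the page and when -- the 'History' view."""
        return [(r.author, r.timestamp) for r in self.revisions]

    def rollback(self, to_index: int, moderator: str) -> None:
        """Revert vandalism by republishing an earlier revision as the newest one."""
        self.edit(moderator, self.revisions[to_index].content)

page = WikiPage("Example Entry")
page.edit("good_editor", "An accurate, well-sourced article.")
page.edit("vandal123", "defaced content")
page.rollback(to_index=0, moderator="watchful_editor")   # community self-repair
print(page.current())   # back to the accurate text, with the vandalism still logged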
Don’t Underestimate the Power of Wikipedia Not only is the nonprofit Wikipedia, with its enthusiastic army of unpaid experts and editors, replacing the three-hundred-year reference reign of Encyclopedia Britannica, Wikipedia entries can impact nearly all large-sized organizations. Wikipedia is the go-to, first-choice reference site for a generation of “netizens,” and Wikipedia entries are invariably one of the top links, often the first link, to appear in Internet search results. This position means that anyone from top executives to political candidates to any firm large enough to warrant an entry has to contend with the very public commentary offered up in a Wikipedia entry. In the same way that firms monitor their online reputations in blog posts and Twitter tweets, they’ve also got to keep an eye on wikis. But firms that overreach and try to influence an entry outside of Wikipedia’s mandated neutral point of view (NPOV), risk a backlash and public exposure. Version tracking means the wiki sees all. Users on computers at right-leaning Fox News were embarrassingly caught editing the wiki page of the lefty pundit and politician Al Franken (a nemesis of Fox’s Bill O’Reilly) (Bergman, 2007); Sony staffers were flagged as editing the entry for the Xbox game Halo 3 (Williams, 2007); and none other than Wikipedia founder Jimmy Wales was criticized for editing his own Wikipedia biography (Hansen, 2005)—acts that some consider bad online form at best, and dishonest at worst. One last point on using Wikipedia for research. Remember that according to its own stated policies, Wikipedia isn’t an original information source; rather, it’s a clearinghouse for verified information. So citing Wikipedia as a reference usually isn’t considered good form. Instead, seek out original (and verifiable) sources, such as those presented via the links at the bottom of Wikipedia entries. Key Takeaways • Wikis can be powerful tools for many-to-many content collaboration, and can be ideal for creating resources that benefit from the input of many such as encyclopedia entries, meeting agendas, and project status documents. • The greater the number of wiki users, the more likely the information contained in the wiki will be accurate and grow in value. • Wikis can be public or private. • The availability of free or low-cost wiki tools can create a knowledge clearinghouse on topics, firms, products, and even individuals. Organizations can seek to harness the collective intelligence (wisdom of crowds) of online communities. The openness of wikis also acts as a mechanism for promoting organizational transparency and accountability. Questions and Exercises 1. Visit a wiki, either an established site like Wikipedia, or a wiki service like Socialtext. Make an edit to a wiki entry or use a wiki service to create a new wiki for your own use (e.g., for a class team to use in managing a group project). Be prepared to share your experience with the class. 2. What factors determine the value of a wiki? Which key concept, first introduced in Chapter 2 “Strategy and Technology: Concepts and Frameworks for Understanding What Separates Winners from Losers”, drives a wiki’s success? 3. If anyone can edit a wiki, why aren’t more sites crippled by vandalism or by inaccurate or inappropriate content? Are there technical reasons not to be concerned? Are there “social” reasons that can alleviate concern? 4. Give examples of corporate wiki use, as well as examples where firms used wikis to engage their customers or partners. 
What is the potential payoff of these efforts? Are there risks associated with these efforts? 5. Do you feel that you can trust content in wikis? Do you feel this content is more or less reliable than content in print encyclopedias? Than the content in newspaper articles? Why? 6. Have you ever run across an error in a wiki entry? Describe the situation. 7. Is it ethical for a firm or individual to edit their own Wikipedia entry? Under what circumstances would editing a Wikipedia entry seem unethical to you? Why? What are the risks a firm or individual is exposed to when making edits to public wiki entries? How do you suppose individuals and organizations are identified when making wiki edits? 8. Would you cite Wikipedia as a reference when writing a paper? Why or why not?
textbooks/biz/Management/Book%3A_Information_Systems__A_Managers_Guide_to_Harnessing_Technology/07%3A_Peer_Production_Social_Media_and_Web_2.0/7.03%3A_Wikis.txt
Learning Objectives After studying this section you should be able to do the following: 1. Know what social networks are, be able to list key features, and understand how they are used by individuals, groups, and corporations. 2. Understand the difference between major social networks MySpace, Facebook, and LinkedIn. 3. Recognize the benefits and risks of using social networks. 4. Be aware of trends that may influence the evolution of social networks. Social networks have garnered increasing attention as established networks grow and innovate, new networks emerge, and value is demonstrated. MySpace signed a billion-dollar deal to carry ads from Google’s AdSense network. Meanwhile, privately held Facebook has blown past the flagging MySpace. Its leadership in privacy management, offering new features, allowing third-party applications on its platform, and providing sophisticated analytics tools to corporations and other on-site sponsors have helped the firm move beyond its college roots. LinkedIn, which rounds out the big three U.S. public social networks, has grown to the point where its influence is threatening recruiting sites like Monster.com and CareerBuilder (Boyle, 2009). It now offers services for messaging, information sharing, and even integration with the BusinessWeek Web site. Media reports often mention MySpace, Facebook, and LinkedIn in the same sentence. However, while these networks share some common features, they serve very different purposes. MySpace pages are largely for public consumption. Started by musicians, MySpace casts itself as a media discovery tool bringing together users with similar tastes (Johnson, 2010). Facebook, by contrast, is more oriented towards reinforcing existing social ties between people who already know each other. This difference leads to varying usage patterns. Since Facebook is perceived by users as relatively secure, with only invited “friends” seeing your profile, over a third of Facebook users post their mobile phone numbers on their profile pages. LinkedIn was conceived from the start as a social network for business users. The site’s profiles act as a sort of digital Rolodex that users update as they move or change jobs. Users can pose questions to members of their network, engage in group discussions, ask for introductions through mutual contacts, and comment on others’ profiles (e.g., recommending a member). Active members find the site invaluable for maintaining professional contacts, seeking peer advice, networking, and even recruiting. Carmen Hudson, Starbucks manager of enterprise staffing, states LinkedIn is “one of the best things for finding midlevel executives” (King, 2007). Such networks are also putting increasing pressure on firms to work particularly hard to retain top talent. While once HR managers fiercely guarded employee directories for fear that a list of talent may fall into the hands of rivals, today’s social networks make it easy for anyone to gain a list of a firm’s staff, complete with contact information. While these networks dominate in the United States, the network effect and cultural differences work to create islands where other social networks are favored by a particular culture or region. The first site to gain traction in a given market is usually the winner. Google’s Orkut, Mixi, and Cyworld have small U.S. followings, but are among the largest sites in Brazil, Japan, and South Korea. 
Research by Ipsos Insight also suggests that users in many global markets, including Brazil, South Korea, and China, are more active social networkers than their U.S. counterparts1. Perhaps the most powerful (and controversial) feature of most social networks is the feed (or newsfeed). Pioneered by Facebook but now adopted by most services, feeds provide a timely update on the activities of people or topics that an individual has an association with. Feeds can give you a heads-up when someone makes a friend, joins a group, posts a photo, or installs an application. Feeds are inherently viral. By seeing what others are doing on a social network, feeds can rapidly mobilize populations and dramatically spread the adoption of applications. Leveraging feeds, it took just ten days for the Facebook group Support the Monks’ Protest in Burma to amass over one hundred and sixty thousand Facebook members. Feeds also helped music app iLike garner three million Facebook users just two weeks after its launch (Lacy, 2008; Nicole, 2007). Its previous Web-based effort took eight months to reach those numbers. But feeds are also controversial. Many users react negatively to this sort of public broadcast of their online activity, and feed mismanagement can create public relations snafus, user discontent, and potentially open up a site to legal action. Facebook initially dealt with a massive user outcry at the launch of feeds, and faced a subsequent backlash when its Beacon service broadcast user purchases without first explicitly asking their permission, and during attempts to rework its privacy policy and make Facebook data more public and accessible. (See Chapter 8 “Facebook: Building a Business from the Social Graph” for more details.) Social Networks The foundation of a social network is the user profile, but utility goes beyond the sort of listing found in a corporate information directory. Typical features of a social network include support for the following: • Detailed personal profiles • Affiliations with groups, such as alumni, employers, hobbies, fans, health conditions) • Affiliations with individuals (e.g., specific “friends”) • Private messaging and public discussions • Media sharing (text, photos, video) • Discovery-fueling feeds of recent activity among members (e.g., status changes, new postings, photos, applications installed) • The ability to install and use third-party applications tailored to the service (games, media viewers, survey tools, etc.), many of which are also social and allow others to interact • Corporate Use of Social Networks Hundreds of firms have established “fan” pages on Facebook and communites on LinkedIn. These are now legitimate customer- and client-engagement platforms that also support advertising. If a customer has decided to press the “like” button of a firm’s Facebook page and become a “fan,” corporate information will appear in their newsfeed, gaining more user attention than the often-ignored ads that run on the sides of social networks. (For more on social networks and advertising, see Chapter 8 “Facebook: Building a Business from the Social Graph”.) But social networks have also become organizational productivity tools. Many employees have organized groups using publicly available social networking sites because similar tools are not offered by their firms. Workforce Management reported that MySpace had over forty thousand groups devoted to companies or coworkers, while Facebook had over eight thousand (Frauenheim, 2007). 
Assuming a large fraction of these groups are focused on internal projects, this demonstrates a clear pent-up demand for corporate-centric social networks (and creates issues as work dialogue moves outside firm-supported services). Many firms are choosing to meet this demand by implementing internal social network platforms that are secure and tailored to firm needs. At the most basic level, these networks have supplanted the traditional employee directory. Social network listings are easy to update and expand. Employees are encouraged to add their own photos, interests, and expertise to create a living digital identity. Firms such as Deloitte, Dow Chemical, and Goldman Sachs have created social networks for “alumni” who have left the firm or retired. These networks can be useful in maintaining contacts for future business leads, rehiring former employees (20 percent of Deloitte’s experienced hires are so-called boomerangs, or returning employees), or recruiting retired staff to serve as contractors when labor is tight (King, 2006). Maintaining such networks will be critical in industries like IT and health care that are likely to be plagued by worker shortages for years to come. Social networking can also be important for organizations like IBM, where some 42 percent of employees regularly work from home or client locations. IBM’s social network makes it easier to locate employee expertise within the firm, organize virtual work groups, and communicate across large distances (Bulkley, 2007). As a dialogue catalyst, a social network transforms the public directory into a font of knowledge sharing that promotes organization flattening and value-adding expertise sharing. While IBM has developed their own social network platforms, firms are increasingly turning to third-party vendors like SelectMinds (adopted by Deloitte, Dow Chemical, and Goldman Sachs) and LiveWorld (adopted by Intuit, eBay, the NBA, and Scientific American). Ning allows anyone to create a social network and currently hosts over 2.3 million separate online communities (Swisher, 2010). A Little Too Public? As with any type of social media, content flows in social networks are difficult to control. Embarrassing disclosures can emerge from public systems or insecure internal networks. Employees embracing a culture of digital sharing may err and release confidential or proprietary information. Networks could serve as a focal point for the disgruntled (imagine the activity on a corporate social network after a painful layoff). Publicly declared affiliations, political or religious views, excessive contact, declined participation, and other factors might lead to awkward or strained employee relationships. Users may not want to add a coworker as a friend on a public network if it means they’ll expose their activities, lives, persona, photos, sense of humor, and friends as they exist outside of work. And many firms fear wasted time as employees surf the musings and photos of their peers. All are advised to be cautious in their social media sharing. Employers are trawling the Internet, mining Facebook, and scouring YouTube for any tip-off that a would-be hire should be passed over. A word to the wise: those Facebook party pics, YouTube videos of open mic performances, or blog postings from a particularly militant period might not age well and may haunt you forever in a Google search. Think twice before clicking the upload button! 
As Socialnomics author Erik Qualman puts it, “What happens in Vegas stays on YouTube (and Flickr, Twitter, Facebook…).” Firms have also created their own online communities to foster brainstorming and customer engagement. Dell’s IdeaStorm.com forum collects user feedback and is credited with prompting line offerings, such as the firm’s introduction of a Linux-based laptop (Greenfield, 2008). At MyStarbucksIdea.com, the coffee giant has leveraged user input to launch a series of innovations ranging from splash sticks that prevent spills in to-go cups, to new menu items. Both IdeaStorm and MyStarbucksIdea run on a platform offered by Salesforce.com that not only hosts these sites but also provides integration into Facebook and other services. Starbucks (the corporate brand with the most Facebook “fans”) has extensively leveraged the site, using Facebook as a linchpin in the “Free Pastry Day” promotion (credited with generating one million in-store visits in a single day) and promotion of the firm’s AIDS-related (Starbucks) RED campaign, which garnered an astonishing three hundred ninety million “viral impressions” through feeds, wall posts, and other messaging (Brandau, 2009). Social Networks and Health Care Dr. Daniel Palestrant often shows a gruesome slide that provides a powerful anecdote for Sermo, the social network for physicians that he cofounded and where he serves as CEO. The image is of an eight-inch saw blade poking through both sides of the bloodied thumb of a construction worker who’d recently arrived in a hospital emergency room. A photo of the incident was posted to Sermo, along with an inquiry on how to remove the blade without damaging tissue or risking a severed nerve. Within minutes replies started coming back. While many replies advised to get a hand surgeon, one novel approach suggested cutting a straw lengthwise, inserting it under the teeth of the blade, and sliding the protected blade out while minimizing further tissue tears (Schulder, 2009). The example illustrates how doctors using tools like Sermo can tap into the wisdom of crowds to save thumbs and a whole lot more. Sermo is a godsend to remote physicians looking to gain peer opinion on confounding cases or other medical questions. The American Medical Association endorsed the site early on2, and the Nature scientific journals have included a “Discuss on Sermo” button alongside the online versions of their medical articles. Doctors are screened and verified to maintain the integrity of participants. Members leverage the site both to share information with each other and to engage in learning opportunities provided by pharmaceutical companies and other firms. Institutional investors also pay for special access to poll Sermo doctors on key questions, such as opinions on pending FDA drug approval. Sermo posts can send valuable warning signals on issues such as disease outbreaks or unseen drug side effects. And doctors have also used the service to rally against insurance company policy changes. While Sermo focuses on the provider side of the health care equation, a short walk from the firm’s Cambridge, Massachusetts, headquarters will bring one to PatientsLikeMe (PLM), a social network empowering chronically ill patients across a wide variety of disease states. The firm’s “openness policy” is in contrast to privacy rules posted on many sites and encourages patients to publicly track and post conditions, treatments, and symptom variation over time, using the site’s sophisticated graphing and charting tools. 
The goal is to help others improve the quality of their own care by harnessing the wisdom of crowds. Todd Small, a multiple sclerosis sufferer, used the member charts and data on PLM to discover that his physician had been undermedicating him. After sharing site data with his doctor, his physician verified the problem and upped the dose. Small reports that the finding changed his life, helping him walk better than he had in a decade and a half and eliminating a feeling that he described as being trapped in “quicksand” (Goetz, 2008). In another example of PLM’s people power, the site ran its own clinical trial–like experiment to rapidly investigate promising claims that the drug Lithium could improve conditions for ALS (amyotrophic lateral sclerosis) patients. While community efforts did not support these initial claims, a decision was arrived at in months, whereas previous efforts to marshal researchers and resources to focus on the relatively rare disease would have taken many years, even if funding could be found (Kane, et. al., 2009). Both Sermo and PatientsLikeMe are start-ups that are still exploring the best way to fund their efforts for growth and impact. Regardless of where these firms end up, it should be clear from these examples that social media will remain a powerful force on the health care landscape. Key Takeaways • Electronic social networks help individuals maintain contacts, discover and engage people with common interests, share updates, and organize as groups. • Modern social networks are major messaging services, supporting private one-to-one notes, public postings, and broadcast updates or “feeds.” • Social networks also raise some of the strongest privacy concerns, as status updates, past messages, photos, and other content linger, even as a user’s online behavior and network of contacts changes. • Network effects and cultural differences result in one social network being favored over others in a particular culture or region. • Information spreads virally via news feeds. Feeds can rapidly mobilize populations, and dramatically spread the adoption of applications. The flow of content in social networks is also difficult to control and sometimes results in embarrassing public disclosures. • Feeds have a downside and there have been instances where feed mismanagement has caused user discontent, public relations problems, and the possibility of legal action. • The use of public social networks within private organizations is growing, and many organizations are implementing their own, private, social networks. • Firms are also setting up social networks for customer engagement and mining these sites for customer ideas, innovation, and feedback. Questions and Exercises 1. Visit the major social networks (MySpace, Facebook, LinkedIn). What distinguishes one from the other? Are you a member of any of these services? Why or why not? 2. How are organizations like Deloitte, Goldman Sachs, and IBM using social networks? What advantages do they gain from these systems? 3. What factors might cause an individual, employee, or firm to be cautious in their use of social networks? 4. How do you feel about the feed feature common in social networks like Facebook? What risks does a firm expose itself to if it leverages feeds? How might a firm mitigate these kinds of risks? 5. What sorts of restrictions or guidelines should firms place on the use of social networks or the other Web 2.0 tools discussed in this chapter? Are these tools a threat to security? 
Can they tarnish a firm’s reputation? Can they enhance a firm’s reputation? How so? 6. Why do information and applications spread so quickly within networks like Facebook? What feature enables this? What key promotional concept (described in Chapter 2 “Strategy and Technology: Concepts and Frameworks for Understanding What Separates Winners from Losers”) does this feature foster? 7. Why are some social networks more popular in some nations than others? 8. Investigate social networks on your own. Look for examples of their use for fostering political and social movements; for their use in health care, among doctors, patients, and physicians; and for their use among other professional groups or enthusiasts. Identify how these networks might be used effectively, and also look for any potential risks or downside. How are these efforts supported? Is there a clear revenue model, and do you find these methods appropriate or potentially controversial? Be prepared to share your findings with your class.
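The feed mechanism described in this section, in which one member’s action is pushed to everyone affiliated with them, can be sketched as a simple “fan-out” step. The code below is an illustrative toy added to this text, not the architecture of Facebook or any other network; the function names and data structures are assumptions.

# Toy "fan-out" sketch of a social network feed: one member's activity is
# copied into the feeds of everyone who follows them, which is what makes
# feeds such an effective viral distribution channel. Illustrative only.
from collections import defaultdict

followers = defaultdict(set)   # user -> set of users who follow them
feeds = defaultdict(list)      # user -> list of items shown in their feed

def follow(follower: str, followee: str) -> None:
    followers[followee].add(follower)

def post_update(user: str, activity: str) -> None:
    """Fan the activity out to every follower's feed."""
    item = f"{user}: {activity}"
    for f in followers[user]:
        feeds[f].append(item)

follow("alice", "campus_group")
follow("bob", "campus_group")
follow("carol", "alice")

post_update("campus_group", "joined the cause page")
post_update("alice", "installed a new photo app")

print(feeds["bob"])    # ['campus_group: joined the cause page']
print(feeds["carol"])  # ['alice: installed a new photo app']
# Because each follower sees (and may repeat) the action, a single update can
# cascade through the network -- the viral quality described above.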
textbooks/biz/Management/Book%3A_Information_Systems__A_Managers_Guide_to_Harnessing_Technology/07%3A_Peer_Production_Social_Media_and_Web_2.0/7.04%3A_Electronic_Social_Networks.txt
Learning Objectives After studying this section you should be able to do the following: 1. Appreciate the rapid rise of Twitter—its scale, scope, and broad appeal. 2. Understand how Twitter is being used by individuals, organizations, and political movements. 3. Contrast Twitter and microblogging with Facebook, conventional blogs, and other Web 2.0 efforts. 4. Consider the commercial viability of the effort, its competitive environment, and concerns regarding limited revenue. Spawned in 2006 as a side project at the now-failed podcasting start-up Odeo (an effort backed by Blogger.com founder Evan Williams), Twitter has been on a rocket ride. The site’s user numbers have blasted past both mainstream and new media sites, dwarfing New York Times, LinkedIn, and Digg, among others. Reports surfaced of rebuffed buyout offers as high as \$500 million (Ante, 2009). By the firm’s first developer conference in April 2010, Twitter and its staff of 175 employees had created a global phenomenon embraced by over one hundred million users worldwide. Twitter is a microblogging service that allows users to post 140-character messages (tweets) via the Web, SMS, or a variety of third-party desktop and smartphone applications. The microblog moniker is a bit of a misnomer. The service actually has more in common with Facebook’s status updates and news feeds than it does with traditional blogs. But unlike Facebook, where most users must approve “friends” before they can see status updates, Twitter’s default setting allows for asymmetrical following (although it is possible to set up private Twitter accounts and to block followers). Sure, there’s a lot of inane “tweeting” going on—lots of meaningless updates that read, “I’m having a sandwich” or “in line at the airport.” But while not every user may have something worthwhile to tweet, many find that Twitter makes for invaluable reading, offering a sense of what friends, customers, thought leaders, and newsmakers are thinking. Twitter leadership has described the service as communicating “The Pulse of the Planet” (Schonfeld, 2009). For many, Twitter is a discovery engine, a taste-making machine, a critical source of market intelligence, a source of breaking news, and an instantaneous way to plug into the moment’s zeitgeist. Many also find Twitter to be an effective tool for quickly blasting queries to friends, colleagues, or strangers who might offer potentially valuable input. Says futurist Paul Saffo, “Instead of creating the group you want, you send it and the group self-assembles” (Miller, 2009). Users can classify comments on a given topic using hash tags (keywords preceded by the “#” or “hash” symbol), allowing others to quickly find related tweets (e.g., #iranelection, #mumbai, #swineflu, #sxsw). Any user can create a hash tag—just type it into your tweet (you may want to search Twitter first to make sure that the tag is not in use by an unrelated topic and that if it is in use, it appropriately describes how you want your tweet classified). Twitter users have broken news during disasters, terror attacks, and other major events. Dictators fear the people power Twitter enables, and totalitarian governments worldwide have moved to block citizen access to the service (prompting Twitter to work on censor-evading technology). During the 2009 Iranian election protests, the U.S. 
State Department even asked Twitter to postpone maintenance to ensure the service would continue to be available to support the voice and activism of Iran’s democracy advocates (Ruffini, 2009). Twitter is also emerging as a legitimate business tool. Consider the following commercial examples: • Starbucks uses Twitter in a variety of ways. It has run Twitter-based contests and used the service to spread free samples of new products, such as its VIA instant coffee line. Twitter has also been a way for the company to engage customers in its cause-based marketing efforts, such as (Starbucks) RED, which supports (Product) RED. Starbucks has even recruited staff via Twitter and was one of the first firms to participate in Twitter’s advertising model featuring “promoted tweets.” • Dell used Twitter to uncover an early warning sign indicating poor design of the keyboard on its Mini 9 Netbook PC. After a series of tweets from early adopters indicated that the apostrophe and return keys were positioned too closely together, the firm dispatched design change orders quickly enough to correct the problem when the Mini 10 was launched just three months later. By December 2009, Dell also claimed to have netted \$6.5 million in outlet store sales referred via the Twitter account @DellOutlet (more than 1.5 million followers) (Eaton, 2009) and another \$1 million from customers who have bounced from the outlet to the new products site (Abel, 2009). • Brooklyn Museum patrons can pay an additional \$20 a year for access to the private, members-only “1stFans” Twitter feed that shares information on special events and exclusive access to artist content. • Twitter is credited with having raised millions via Text-to-Donate and other fundraising efforts following the Haiti earthquake. • Twitter can be a boon for sharing time-sensitive information. The True Massage and Wellness Spa in San Francisco tweets last-minute cancellations to tell customers of an unexpected schedule opening. With Twitter, appointments remain booked solid. Gourmet food trucks, popular in many American cities, are also using Twitter to share location and create hipster buzz. Los Angeles’s Kogi Korean Taco Truck now has over sixty thousand followers and uses Twitter to reveal where it’s parked, ensuring long lines of BBQ-craving foodies. Of the firm’s success, owner Roy Choi says, “I have to give all the credit to Twitter” (Romano, 2009). • Electronics retailer Best Buy has recruited over 2,300 Blue Shirt and Geek Squad staffers to crowdsource Twitter-driven inquiries via @Twelpforce, the firm’s customer service Twitter account. Best Buy staffers register their personal Twitter accounts on a separate Best Buy–run site. Then any registered employees tweeting using the #twelpforce, will automatically have those posts echoed through @Twelpforce, with the employee’s account credited at the end of the tweet. As of November 2009, Twelpforce had provided answers to over 19,500 customer inquiries1. Figure 7.1 A Sampling of Tweets Filtered through Best Buy’s @Twelpforce Twitter Account Surgeons and residents at Henry Ford Hospital have even tweeted during brain surgery (the teaching hospital sees the service as an educational tool). Some tweets are from those so young they’ve got “negative age.” Twitter.com/kickbee is an experimental fetal monitor band that sends tweets when motion is detected: “I kicked Mommy at 08:52.” And savvy hackers are embedding “tweeting” sensors into all sorts of devices. 
Botanicalls, for example, offers an electronic flowerpot stick that detects when plants need care and sends Twitter status updates to owners (sample post: “URGENT! Water me!”). Organizations are well advised to monitor Twitter activity related to the firm, as it can act as a sort of canary in a coal mine, uncovering emerging events. Users are increasingly using the service as a way to form flash protest crowds. Amazon.com, for example, was caught off guard over a spring 2009 holiday weekend when thousands used Twitter to rapidly protest the firm’s reclassification of gay and lesbian books (hash tag #amazonfail). Others use the platform for shame and ridicule. BP has endured withering ridicule from the satire account @BPGlobalPR (followed by roughly 200,000 two months after the spill). For all the excitement, many wonder if Twitter is overhyped. Some reports suggest that many Twitter users are curious experimenters who drop the service shortly after signing up (Martin, 2009). This raises the question of whether Twitter is a durable phenomenon or just a fad. Pundits also wonder if revenues will ever justify initially high valuations and if rivals could usurp Twitter’s efforts with similar features. Thus far, Twitter has been following a “grow-first-harvest-later” approach (Murrell, 2010). The site’s rapid rise has allowed it to attract enough start-up capital to enable it to approach revenue gradually and with caution, in the hopes that it won’t alienate users with too much advertising (an approach not unlike Google’s efforts to nurture YouTube). MIT’s Technology Review reports that data sharing deals with Google and Bing may have brought in enough money to make the service profitable in 2009, but that amount was modest (just \$25 million) (Talbot, 2010). Twitter’s advertising platform is expected to be far more lucrative. Reflecting Twitter’s “deliberately cautious” approach to revenue development, the ad model featuring sponsored “promoted tweets” rolled out first as part of search results, with distribution to individual Twitter feeds progressing as the firm experiments and learns what works best for users and advertisers. Another issue—many Twitter users rarely visit the site. Most active users post and read tweets using one of many—often free—applications provided by third parties, such as Seesmic, TweetDeck, and Twhirl. This happens because Twitter made its data available for free to other developers via API (application programming interface). Exposing data can be a good move as it spawned an ecosystem of over one hundred thousand complementary third-party products and services that enhance Twitter’s reach and usefulness (generating network effects from complementary offerings similar to other “platforms” like Windows, iPhone, and Facebook). There are potential downsides to such openness. If users don’t visit Twitter.com, that lessens the impact of any ads running on the site. This creates what is known as the “free rider problem,” where users benefit from a service while offering no value in exchange. Encouraging software and service partners to accept ads in exchange for a cut of the revenue could lessen the free rider problem (Kafka, 2010). When users don’t visit a service, it makes it difficult to spread awareness of new products and features. It can also create branding challenges and customer frustration.
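To see why so much Twitter activity happens inside third-party clients, consider how little code it takes to build on an open data API. The sketch below is illustrative only: the endpoint URL, response format, and field names are assumptions modeled loosely on the era’s public search API, not Twitter’s actual, current interface (which requires authentication and has changed repeatedly since this chapter was written).

```python
import json
import urllib.parse
import urllib.request

# Hypothetical, era-style search endpoint; the real service's URLs and
# authentication requirements are different and have changed over time.
SEARCH_URL = "https://api.example-microblog.com/search.json"

def search_posts(query, count=20):
    """Fetch recent public posts matching a query (for example, a hash tag)."""
    params = urllib.parse.urlencode({"q": query, "count": count})
    with urllib.request.urlopen(f"{SEARCH_URL}?{params}") as response:
        payload = json.load(response)
    # Assumed response shape: {"results": [{"from_user": ..., "source": ..., "text": ...}]}
    return payload.get("results", [])

if __name__ == "__main__":
    for post in search_posts("#swineflu"):
        # "source" names the client that created the post, one hint of how much
        # activity flows through third-party apps rather than the service's own site.
        print(post["from_user"], post["source"], post["text"], sep=" | ")
```

A developer who can issue one HTTP request can build a client, which is why the ecosystem grew so quickly, and why so many of the resulting page views never touch Twitter.com or its ads.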
Twitter execs lamented that customers were often confused when they searched for “Twitter” in the iPhone App Store and were presented with scores of offerings but none from Twitter itself (Goldman, 2010). Twitter’s purchase of the iPhone app Tweetie (subsequently turned into the free “Twitter for iPhone” app) and the launch of its own URL-shortening service (competing with bit.ly and others) signal that Twitter is willing to move into product and service niches and compete with third parties that are reliant on the Twitter ecosystem. Microblogging does appear to be here to stay, and the impact of Twitter has been deep, broad, stunningly swift, and at times humbling in the power that it wields. But whether Twitter will be a durable, profit-gushing powerhouse remains to be seen. Speculation on Twitter’s future hasn’t prevented many firms from commercializing new microblogging services, and a host of companies have targeted these tools for internal corporate use. Salesforce.com’s Chatter, Socialtext Signals, and Yammer are all services that have been billed as “Twitter for the Enterprise.” Such efforts allow for Twitter-style microblogging that is restricted for participation and viewing by firm-approved accounts. Key Takeaways • While many public and private microblogging services exist, Twitter remains by far the dominant service. • Unlike status updates found on services like Facebook and LinkedIn, Twitter’s default supports asymmetric communication, where someone can follow updates without first getting the account holder’s approval (a toy sketch contrasting the two models appears at the end of this section). This function makes Twitter a good choice for anyone cultivating a following—authors, celebrities, organizations, and brand promoters. • You don’t need to tweet to get value. Many Twitter users follow friends, firms, celebrities, and thought leaders, quickly gaining access to trending topics. • Twitter hash tags (keywords preceded by the # character) are used to organize “tweets” on a given topic. Users can search on hash tags, and many third-party applications allow for Tweets to be organized and displayed by tag. • Firms are leveraging Twitter in a variety of ways, including promotion, customer response, gathering feedback, and time-sensitive communication. • Like other forms of social media, Twitter can serve as a hothouse that attracts opinion and forces organizational transparency and accountability. • Activists have leveraged the service worldwide, and it has also served as an early warning mechanism in disasters, terror attacks, and other events. • Despite its rapid growth and impact, significant questions remain regarding the firm’s durability, revenue prospects, and enduring appeal to initial users. • Twitter makes its data available to third parties via an API (application programming interface). The API has helped a rich ecosystem of over seventy thousand Twitter-supporting products and services emerge. But by making the Twitter stream available to third parties, Twitter may suffer from the free rider problem, where other firms benefit from Twitter’s service without providing much benefit back to Twitter itself. New ad models may provide a way to distribute revenue-generating content through these services. Twitter has also begun acquiring firms that compete with other players in its ecosystem. Questions and Exercises 1. If you don’t already have one, set up a Twitter account and “follow” several others. Follow a diverse group—corporations, executives, pundits, or other organizations. Do you trust that these account holders are who they say they are? Why?
Which examples do you think use the service most effectively? Which provide the weaker examples of effective Twitter use? Why? Have you encountered Twitter “spam” or unwanted followers? What can you do to limit such experiences? Be prepared to discuss your experiences with class. 2. If you haven’t done so, install a popular Twitter application such as TweetDeck, Seesmic, or a Twitter client for your mobile device. Why did you select the product you chose? What advantages does your choice offer over simply using Twitter’s Web page? What challenges do these clients offer Twitter? Does the client you chose have a clear revenue model? Is it backed by a viable business? 3. Visit search.twitter.com. Which Twitter hash tags are most active at this time? Are there other “trending topics” that aren’t associated with hash tags? What do you think of the activity in these areas? Is there legitimate, productive activity happening? Search Twitter on topics, firms, brand names, and issues of interest to you. What do you think of the quality of the information you’ve uncovered on Twitter? Who might find this to be useful? 4. Why would someone choose to use Twitter over Facebook’s status update, or other services? Which (if either) do you prefer and why? 5. What do you think of Twitter’s revenue prospects? Is the firm a viable independent service or simply a feature to be incorporated into other social media activity? Advocate where you think the service will be in two years, five, ten. Would you invest in Twitter? Would you suggest that other firms do so? Why? 6. Assume the role of a manager for your firm. Advocate how the organization should leverage Twitter and other forms of social media. Provide examples of effective use, and cautionary tales, to back up your recommendation. 7. Some instructors have mandated Twitter for classroom use. Do you think this is productive? Would your professor advocate tweeting during lectures? What are the pros and cons of such use? Work with your instructor to discuss a set of common guidelines for in-class and course use of social media. 8. As of this writing, Twitter was just rolling out advertising via “promoted tweets.” Perform some additional research. How have Twitter’s attempts to grow revenues fared? How has user growth been trending? Has the firm’s estimated value increased or decreased from the offer figures cited in this chapter? Why? 9. What do you think of Twitter’s use of the API? What are the benefits of offering an API? What are the downsides? Would you create a company to take advantage of the Twitter API? Why or why not? 10. Follow this book’s author at http://twitter.com/gallaugher. Tweet him if you run across interesting examples that you think would be appropriate for the next version of the book. 1Twitter.com, “Case Study: Best Buy Twelpforce,” Twitter 101, business.twitter.com/twitter101/case_bestbuy.
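The asymmetric-following point in the Key Takeaways above can be made concrete with a few lines of code. This is a toy illustration only; the account names are invented, and real services store these relationships in distributed databases rather than in-memory sets.

```python
from collections import defaultdict

# Toy model: asymmetric following (Twitter-style) versus mutual friendship
# (Facebook-style). All account names are invented.
follows = defaultdict(set)   # directed edges: account -> accounts it follows
friends = defaultdict(set)   # undirected edges: confirmed, two-way ties

def follow(a, b):
    """Twitter-style: a starts seeing b's updates; no approval from b is needed."""
    follows[a].add(b)

def befriend(a, b):
    """Facebook-style: the tie exists only once both sides are linked."""
    friends[a].add(b)
    friends[b].add(a)

follow("reader_42", "newsmaker")
befriend("alice", "bob")

print("newsmaker follows reader_42 back?", "reader_42" in follows["newsmaker"])  # False: one-way
print("alice and bob are mutual friends?", "alice" in friends["bob"])            # True: two-way
```

The one-way edge is what lets an author or brand amass followers without approving each one, and it is why follower counts can grow so much faster than friend lists.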
Learning Objectives After studying this section you should be able to do the following: 1. Know key terms related to social media, peer production, and Web 2.0, including RSS, folksonomies, mash-ups, location-based services, virtual worlds, and rich media. 2. Provide examples of the effective business use of these terms and technologies. • RSS RSS (an acronym that stands for both “really simple syndication” and “rich site summary”) enables busy users to scan the headlines of newly available content and click on an item’s title to view items of interest, thus sparing them from having to continually visit sites to find out what’s new. Users begin by subscribing to an RSS feed for a Web site, blog, podcast, or other data source. The title or headline of any new content will then show up in an RSS reader. Subscribe to the New York Times Technology news feed, for example, and you will regularly receive headlines of tech news from the Times. Viewing an article of interest is as easy as clicking the title you like. Subscribing is often as easy as clicking on the RSS icon appearing on the home page of a Web site of interest. Many firms use RSS feeds as a way to manage information overload, opting to distribute content via feed rather than e-mail. Some even distribute corporate reports via RSS. RSS readers are offered by third-party Web sites such as Google and Yahoo!, and they have been incorporated into all popular browsers and most e-mail programs. Most blogging platforms provide a mechanism for bloggers to automatically publish a feed when each new post becomes available. Google’s FeedBurner is the largest publisher of RSS blog feeds, and offers features to distribute content via e-mail as well. Figure 7.2 RSS readers like Google Reader can be an easy way to scan blog headlines and click through to follow interesting stories. Figure 7.3 Web sites that support RSS feeds will have an icon in the address bar. Click it to subscribe. 7.07: Prediction Markets and the Wisdom of Crowds Learning Objectives After studying this section you should be able to do the following: 1. Understand the concept of the wisdom of crowds as it applies to social networking. 2. List the criteria necessary for a crowd to be smart. Many social software efforts leverage what has come to be known as the wisdom of crowds. In this concept, a group of individuals (a crowd that often consists mostly of untrained amateurs) collectively has more insight than a single expert or a small group of trained professionals. Made popular by author James Surowiecki (whose best-selling book was named after the phenomenon), the idea of crowd wisdom is at the heart of wikis, folksonomy tagging systems, and many other online efforts. An article in the journal Nature positively comparing Wikipedia to Encyclopedia Britannica lent credence to social software’s use in harnessing and distilling crowd wisdom (Giles, 2005). The crowd isn’t always right, but in many cases where topics are complex, problems are large, and outcomes are uncertain, a large, diverse group may bring collective insight to problem solving that one smart guy or a professional committee lacks. One technique for leveraging the wisdom of crowds is a prediction market, where a diverse crowd is polled and opinions aggregated to form a forecast of an eventual outcome. The concept is not new. The stock market is arguably a prediction market, with a stock price representing collective assessment of the discounted value of a firm’s future earnings.
But Internet technologies are allowing companies to set up prediction markets for exploring all sorts of problems. Consider Best Buy, where employees are encouraged to leverage the firm’s TagTrade prediction market to make forecasts, and are offered small gifts as incentives for participation. The idea behind this incentive program is simple: the “blue shirts” (Best Buy employees) are closest to customers. They see traffic patterns and buying cycles, can witness customer reactions first hand, and often have a degree of field insight not available to senior managers at the company’s Minneapolis headquarters. Harness this collective input and you’ve got a group brain where, as wisdom of crowds proponents often put it, “the we is greater than the me.” When Best Buy asked its employees to predict gift card sales, the “crowd’s” collective average answer was 99.5 percent accurate; experts paid to make the prediction were off by 5 percent. Another experiment predicting holiday sales was off by only 1/10 of 1 percent. The experts? Off by 7 percent (Dvorak, 2008; Dye, 2008)! In an article in the McKinsey Quarterly, Surowiecki outlined several criteria necessary for a crowd to be “smart” (Dye, 2008). The crowd must • be diverse, so that participants are bringing different pieces of information to the table, • be decentralized, so that no one at the top is dictating the crowd’s answer, • offer a collective verdict that summarizes participant opinions, • be independent, so that each focuses on information rather than the opinions of others. Google, which runs several predictive markets, underscored these principles when it found that predictions were less accurate when users were geographically proximate, meaning folks in the same work group who sat near one another typically thought too much alike (Cowgill, et. al., 2009). Poorer predictive outcomes likely resulted because these relatively homogeneous clusters of users brought the same information to the table (yet another reason why organizations should hire and cultivate diverse teams). Many firms run predictive markets to aid in key forecasts, and with the potential for real financial payoff. But University of Chicago law professor Todd Henderson warns predictive markets may also hold legal and ethical challenges. The Securities and Exchange Commission may look askance at an employee who gets a heads-up in a predictive market that says a certain drug is going to be approved or fail clinical trials. If she trades on this information is she an insider, subject to prosecution for exploiting proprietary data? Disclosure issues are unclear. Gambling laws are also murky, with Henderson uncertain as to whether certain predictive markets will be viewed as an unregulated form of betting (Dye, 2008). Publicly accessible prediction markets are diverse in their focus. The Iowa Electronic Market attempts to guess the outcome of political campaigns, with mixed results. Farecast (now part of Microsoft’s Bing knowledge engine) claims a 75 percent accuracy rate for forecasting the future price of airline tickets1. The Hollywood Stock Exchange allows participants to buy and sell prediction shares of movies, actors, directors, and film-related options. The exchange, now owned by investment firm Cantor Fitzgerald, has picked Oscar winners with 90 percent accuracy (Surowiecki, 2007). 
And at HedgeStreet.com, participants can make microbets, wagering as little as ten dollars on the outcome of economic events, including predictions on the prices of homes, gold, foreign currencies, oil, and even the economic impact of hurricanes and tropical storms. HedgeStreet is considered a market and is subject to oversight by the Commodity Futures Trading Commission (Lambert, 2006). Key Takeaways • Many Web 2.0 efforts allow firms to tap the wisdom of crowds, identifying collective intelligence. • Prediction markets tap crowd opinion with results that are often more accurate than the most accurate expert forecasts and estimates. • Prediction markets are most accurate when tapping the wisdom of a diverse and variously skilled and experienced group, and are least accurate when participants are highly similar. Questions and Exercises 1. What makes for a “wise” crowd? When might a crowd not be so wise? 2. Find a prediction market online and participate in the effort. Be prepared to share your experience with your class, including any statistics of predictive accuracy, participant incentives, business model of the effort, and your general assessment of the appeal and usefulness of the effort. 3. Brainstorm on the kinds of organizations that might deploy prediction markets. Why might you think the efforts you suggest and advocate would be successful? 4. In what ways are legal issues of concern to prediction market operators? 1“Audit Reveals Farecast Predictive Accuracy at 74.5 percent,” farecast.live.com, May 18, 2007, www.prnewswire.com/news-relea...s-new-tools-to -help-savvy-travelers-catch-elusive-airfare-price-drops-this-summer-58165652.html.
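A toy simulation helps show why a diverse, independent crowd like Best Buy’s “blue shirts” can out-forecast a lone expert. The figures below are invented for illustration and are not Best Buy’s data; the assumption being modeled is that individual guesses are noisy but unbiased, while the single expert carries a systematic bias.

```python
import random

random.seed(42)
TRUE_SALES = 1_000_000   # the outcome being forecast (an invented figure)

# 200 employees guess independently: individually noisy, collectively unbiased.
crowd = [TRUE_SALES * random.uniform(0.7, 1.3) for _ in range(200)]
crowd_forecast = sum(crowd) / len(crowd)

# A single expert whose one estimate carries a systematic 7 percent bias.
expert_forecast = TRUE_SALES * 1.07

def pct_error(forecast):
    return abs(forecast - TRUE_SALES) / TRUE_SALES * 100

print(f"crowd error:  {pct_error(crowd_forecast):.1f}%")   # independent errors largely cancel
print(f"expert error: {pct_error(expert_forecast):.1f}%")  # a shared bias does not cancel
```

Averaging cancels independent errors, which is exactly why Surowiecki’s diversity, decentralization, and independence criteria matter: if the guesses share a common bias, the cancellation (and the crowd’s advantage) disappears.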
Learning Objectives After studying this section you should be able to do the following: 1. Understand the value of crowdsourcing. 2. Identify firms that have used crowdsourcing successfully. The power of Web 2.0 also offers several examples of the democratization of production and innovation. Need a problem solved? Offer it up to the crowd and see if any of their wisdom offers a decent result. This phenomenon, known as crowdsourcing, has been defined by Jeff Howe, founder of the blog crowdsourcing.com and an associate editor at Wired, as “the act of taking a job traditionally performed by a designated agent (usually an employee) and outsourcing it to an undefined, generally large group of people in the form of an open call” (Howe, 2006). Can the crowd really do better than experts inside a firm? At least one company has literally struck gold using crowdsourcing. As told by Don Tapscott and Anthony Williams in their book Wikinomics, mining firm Goldcorp was struggling to gain a return from its 55,000-acre Canadian property holdings. Executives were convinced there was gold “in them thar hills,” but despite years of efforts, the firm struggled to strike any new pay dirt. CEO Rob McEwen, a former mutual fund manager without geology experience who unexpectedly ended up running Goldcorp after a takeover battle, then made what seemed a Hail Mary pass—he offered up all the firm’s data, on the company’s Web site. Along with the data, McEwen ponied up \$575,000 from the firm as prize money for the Goldcorp Challenge to anyone who came up with the best methods and estimates for reaping golden riches. Releasing data was seen as sacrilege in the intensely secretive mining industry, but it brought in ideas the firm had never considered. Taking the challenge was a wildly diverse group of “graduate students, consultants, mathematicians, and military officers.” Eighty percent of the new targets identified by entrants yielded “substantial quantities of gold.” The financial payoff? In just a few years a one-hundred-million-dollar firm grew into a nine-billion-dollar titan. For Goldcorp, the crowd coughed up serious coin. Netflix followed Goldcorp’s lead, offering anonymous data to any takers, along with a one-million-dollar prize to the first team that could improve the accuracy of movie recommendations by 10 percent. Top performers among the over thirty thousand entrants included research scientists from AT&T Labs, researchers from the University of Toronto, a team of Princeton undergrads, and the proverbial “guy in a garage” (and yes, that was his team name). Frustrated for nearly three years, it took a coalition of four teams from Austria, Canada, Israel, and the United States to finally cross the 10 percent threshold. The winning team represented an astonishing brain trust that Netflix would never have been able to harness on its own (Lohr, 2009). Other crowdsourcers include Threadless.com, which produces limited run t-shirts with designs users submit and vote on. Marketocracy runs stock market games and has created a mutual fund based on picks from the 100 top-performing portfolios. Just under seven years into the effort, the firm’s m100 Index reports a 75 percent return versus 35 percent for the S&P 500. The St. Louis Cardinals baseball team is even crowdsourcing. The club’s One for the Birds contest calls for the fans to submit scouting reports on promising players, as the team hopes to broaden its recruiting radar beyond its classic recruiting pool of Division I colleges. 
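The Netflix Prize described above judged entries by root-mean-square error (RMSE) on ratings the contestants never saw; “10 percent better” meant an RMSE at least 10 percent lower than the incumbent system’s. The sketch below shows that arithmetic on a tiny, invented set of ratings; the prediction values are placeholders, not real contest data.

```python
import math

def rmse(predicted, actual):
    """Root-mean-square error over paired predicted and actual ratings."""
    return math.sqrt(sum((p - a) ** 2 for p, a in zip(predicted, actual)) / len(actual))

# Tiny held-out test set on a 1-5 star scale (all values invented).
actual     = [4, 3, 5, 2, 4, 1, 5, 3]
incumbent  = [3.6, 3.4, 4.1, 2.9, 3.5, 2.2, 4.2, 3.1]   # stand-in for the existing recommender
challenger = [3.9, 3.1, 4.7, 2.3, 3.8, 1.5, 4.8, 3.0]   # stand-in for a contest entry

improvement = (rmse(incumbent, actual) - rmse(challenger, actual)) / rmse(incumbent, actual)
print(f"improvement over the incumbent: {improvement:.1%}")   # the prize required at least 10%
```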
There are several public markets for leveraging crowdsourcing for innovation, or as an alternative to standard means of production. Waltham, Massachusetts–based InnoCentive allows “seekers” to offer cash prizes ranging from ten to one hundred thousand dollars. Over one hundred twenty thousand “solvers” have registered to seek solutions for tasks put forward by seekers that include Dow Chemical, Eli Lilly, and Procter & Gamble. Among the findings offered by the InnoCentive crowd is a biomarker that measures the progression of ALS. Amazon.com has even created an online marketplace for crowdsourcing called Mechanical Turk. Anyone with a task to be completed or a problem to be solved can post it to the marketplace, setting the price they are willing to pay for completion or solution. For its role, Amazon takes a small cut of the transaction. And alpha geeks looking to prove their code chops can turn to TopCoder, a firm that stages coding competitions that deliver real results for commercial clients such as ESPN. By 2009, TopCoder contests had attracted over 175,000 participants from 200 countries1 (Brandel, 2007; Brandel, 2008). Not all crowdsourcers are financially motivated. Some benefit by helping to create a better service. Facebook leveraged crowd wisdom to develop versions of its site localized in various languages. Facebook engineers designated each of the site’s English words or phrases as a separate translatable object. Members were then invited to translate the English into other languages, and rated the translations to determine which was best. Using this form of crowdsourcing, fifteen hundred volunteers cranked out Spanish Facebook in a month. It took two weeks for two thousand German speakers to draft Deutsch Facebook. How does the Facebook concept of “poke” translate around the world? The Spaniards decided on “dar un toque,” Germans settled on “anklopfen,” and the French went with “envoyer un poke” (Kirkpatrick, 2008). Vive le crowd! Key Takeaways • Crowdsourcing tackles challenges through an open call to a broader community of potential problem solvers. Examples include Goldcorp’s discovery of optimal mining locations in land it already held, Facebook’s leverage of its users to create translations of the site for various international markets, and Netflix’s solicitation of improvements to its movie recommendation software. • Several firms run third-party crowdsourcing forums, among them InnoCentive for scientific R&D, TopCoder for programming tasks, and Amazon’s Mechanical Turk for general work. Questions and Exercises 1. What is crowdsourcing? Give examples of organizations that are taking advantage of crowdsourcing and be prepared to describe these efforts. 2. What ethical issues should firms be aware of when considering crowdsourcing? Are there other concerns firms may have when leveraging this technique? 3. Assume the role of a manager or consultant. Recommend a firm and a task that would be appropriate for crowdsourcing. Justify your choice, citing factors such as cost, breadth of innovation, time, constrained resources, or other factors. How would you recommend the firm conduct this crowdsourcing effort? 1TopCoder, 2009, http://topcoder.com/home.
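Facebook’s translate-then-rate approach described above boils down to a simple mechanic: collect candidate answers from volunteers, let other members score them, and keep the highest-rated one. Here is a minimal sketch using invented data; “dar un toque” comes from the example in the text, but the alternative candidates and all of the scores are made up.

```python
from collections import defaultdict
from statistics import mean

# Volunteers submit candidate translations; other members rate each candidate.
ratings = defaultdict(list)   # candidate translation -> list of member ratings (1-5)

def submit_rating(candidate, score):
    ratings[candidate].append(score)

for candidate, score in [
    ("dar un toque", 5), ("dar un toque", 4), ("dar un toque", 5),
    ("tocar", 2), ("tocar", 3),
    ("empujar", 1), ("empujar", 2),
]:
    submit_rating(candidate, score)

# The crowd's pick is the candidate with the highest average rating.
winner = max(ratings, key=lambda candidate: mean(ratings[candidate]))
print("crowd's pick for 'poke':", winner)
```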
Learning Objectives After studying this section you should be able to do the following: 1. Illustrate several examples of effective and poor social media use. 2. Recognize the skills and issues involved in creating and staffing an effective social media awareness and response team (SMART). 3. List and describe key components that should be included in any firm’s social media policy. 4. Understand the implications of ethical issues in social media such as “sock puppetry” and “astroturfing” and provide examples and outcomes of firms and managers who used social media as a vehicle for dishonesty. 5. List and describe tools for monitoring social media activity relating to a firm, its brands, and staff. 6. Understand issues involved in establishing a social media presence, including the embassy approach, openness, and staffing. 7. Discuss how firms can engage and respond through social media, and how companies should plan for potential issues and crises. For an example of how outrage can go viral, consider Dave Carroll1. The Canadian singer-songwriter was traveling with his band Sons of Maxwell on a United Airlines flight from Nova Scotia to Nebraska when, during a layover at Chicago’s O’Hare International Airport, Carroll saw baggage handlers roughly tossing his guitar case. The musician’s \$3,500 Taylor guitar was in pieces by the time it arrived in Omaha. In the midst of a busy tour schedule, Carroll didn’t have time to follow up on the incident until after United’s twenty-four-hour period for filing a complaint for restitution had expired. When United refused to compensate him for the damage, Carroll penned the four-minute country ditty “United Breaks Guitars,” performed it in a video, and uploaded the clip to YouTube (sample lyrics: “I should have gone with someone else or gone by car…’cuz United breaks guitars”). Carroll even called out the unyielding United rep by name. Take that, Ms. Irlwig! (Note to customer service reps everywhere: you’re always on.) The clip went viral, receiving 150,000 views its first day and five million more by the next month. Well into the next year, “United Breaks Guitars” remained the top result on YouTube when searching the term “United.” No other topic mentioning that word—not “United States,” “United Nations,” or “Manchester United”—ranked ahead of this one customer’s outrage. Video Dave Carroll’s ode to his bad airline experience, “United Breaks Guitars,” went viral, garnering millions of views. Scarring social media posts don’t just come from outside the firm. Earlier that same year employees of Domino’s Pizza outlet in Conover, North Carolina, created what they thought would be a funny gross-out video for their friends. Posted to YouTube, the resulting footage of the firm’s brand alongside vile acts of food prep was seen by over one million viewers before it was removed. Over 4.3 million references to the incident can be found on Google, and many of the leading print and broadcast outlets covered the story. The perpetrators were arrested, the Domino’s storefront where the incident occurred was closed, and the firm’s president made a painful apology (on YouTube, of course). Not all firms choose to aggressively engage social media. As of this writing some major brands still lack a notable social media presence (Apple comes immediately to mind). But your customers are there and they’re talking about your organization, its products, and its competitors. 
Your employees are there, too, and without guidance, they can step on a social grenade with your firm left to pick out the shrapnel. Soon, nearly everyone will carry the Internet in their pocket. Phones and MP3 players are armed with video cameras capable of recording every customer outrage, corporate blunder, ethical lapse, and rogue employee. Social media posts can linger forever online, like a graffiti tag attached to your firm’s reputation. Get used to it—that genie isn’t going back in the bottle. As the “United Breaks Guitars” and “Domino’s Gross Out” incidents show, social media will impact a firm whether it chooses to engage online or not. An awareness of the power of social media can shape customer support engagement and crisis response, and strong corporate policies on social media use might have given the clueless Domino’s pranksters a heads-up that their planned video would get them fired and arrested. Given the power of social media, it’s time for all firms to get SMART, creating a social media awareness and response team. While one size doesn’t fit all, this section details key issues behind SMART capabilities, including creating the social media team, establishing firmwide policies, monitoring activity inside and outside the firm, establishing the social media presence, and managing social media engagement and response. • Creating the Team Firms need to treat social media engagement as a key corporate function with clear and recognizable leadership within the organization. Social media is no longer an ad hoc side job or a task delegated to an intern. When McDonald’s named its first social media chief, the company announced that it was important to have someone “dedicated 100% of the time, rather than someone who’s got a day job on top of a day job” (York, 2010). Firms without social media baked into employee job functions often find that their online efforts are started with enthusiasm, only to suffer under a lack of oversight and follow-through. One hotel operator found franchisees were quick to create Facebook pages, but many rarely monitored them. Customers later notified the firm that unmonitored hotel “fan” pages contained offensive messages—a racist rant on one, paternity claims against an employee on another. Organizations with a clearly established leadership role for social media can help create consistency in firm dialogue; develop and communicate policy; create and share institutional knowledge; provide training, guidance, and suggestions; offer a place to escalate issues in the event of a crisis or opportunity; and catch conflicts that might arise if different divisions engage without coordination. While firms are building social media responsibility into job descriptions, also recognize that social media is a team sport that requires input from staffers throughout an organization. The social media team needs support from public relations, marketing, customer support, HR, legal, IT, and other groups, all while acknowledging that what’s happening in the social media space is distinct from traditional roles in these disciplines. The team will hone unique skills in technology, analytics, and design, as well as skills for using social media for online conversations, listening, trust building, outreach, engagement, and response. 
As an example of the interdisciplinary nature of social media practice, consider that the social media team at Starbucks (regarded by some as the best in the business) is organized under the interdisciplinary “vice president of brand, content, and online2.” Also note that while organizations with SMARTs (social media teams) provide leadership, support, and guidance, they don’t necessarily drive all efforts. GM’s social media team includes representatives from all the major brands. The idea is that employees in the divisions are still the best to engage online once they’ve been trained and given operational guardrails. Says GM’s social media chief, “I can’t go in to Chevrolet and tell them ‘I know your story better than you do, let me tell it on the Web’” (Barger, 2009)3. Similarly, the roughly fifty Starbucks “Idea Partners” who participate in MyStarbucksIdea are specialists. Part of their job is to manage the company’s social media. In this way, conversations about the Starbucks Card are handled by card team experts, and merchandise dialogue has a product specialist who knows that business best. Many firms find that the social media team is key for coordination and supervision (e.g., ensuring that different divisions don’t overload consumers with too much or inconsistent contact), but the dynamics of specific engagement still belong with the folks who know products, services, and customers best. • Responsibilities and Policy Setting In an age where a generation has grown up posting shoot-from-the-hip status updates and YouTube is seen as a fame vehicle for those willing to perform sensational acts, establishing corporate policies and setting employee expectations are imperative for all organizations. The employees who don’t understand the impact of social media on the firm can do serious damage to their employers and their careers (look to Domino’s for an example of what can go wrong). Many experts suggest that a good social media policy needs to be three things: “short, simple, and clear” (Soat, 2010). Fortunately, most firms don’t have to reinvent the wheel. Several firms, including Best Buy, IBM, Intel, The American Red Cross, and Australian telecom giant Telstra, have made their social media policies public. Most guidelines emphasize the “three Rs”: representation, responsibility, and respect. • Representation. Employees need clear and explicit guidelines on expectations for social media engagement. Are they empowered to speak on behalf of the firm? If they do, it is critical that employees transparently disclose this to avoid legal action. U.S. Federal Trade Commission rules require disclosure of relationships that may influence online testimonial or endorsement. On top of this, many industries have additional compliance requirements (e.g., governing privacy in the health and insurance fields, retention of correspondence and disclosure for financial services firms). Firms may also want to provide guidelines on initiating and conducting dialogue, when to respond online, and how to escalate issues within the organization. • Responsibility. Employees need to take responsibility for their online actions. Firms must set explicit expectations for disclosure, confidentiality and security, and provide examples of engagement done right, as well as what is unacceptable. An effective social voice is based on trust, so accuracy, transparency, and accountability must be emphasized. Consequences for violations should be clear. • Respect. 
Best Buy’s policy for its Twelpforce explicitly states participants must “honor our differences” and “act ethically and responsibly.” Many employees can use the reminder. Sure, customer service is a tough task, and every rep has a story about an unreasonable client. But there’s a difference between letting off steam around the water cooler and venting online. Virgin Atlantic fired thirteen of the airline’s staffers after they posted passenger insults and inappropriate inside jokes on Facebook (Conway, 2008). Policies also need to have teeth. Remember, a fourth “R” is at stake—reputation (both the firm’s and the employee’s). Violators should know the consequences of breaking firm rules, and policies should be backed by action. Best Buy’s policy simply states, “Just in case you are forgetful or ignore the guidelines above, here’s what could happen. You could get fired (and it’s embarrassing to lose your job for something that’s so easily avoided).” Despite these concerns, trying to micromanage employee social media use is probably not the answer. At IBM, rules for online behavior are surprisingly open. The firm’s code of conduct reminds employees to remember privacy, respect, and confidentiality in all electronic communications. Anonymity is not permitted on IBM’s systems, making everyone accountable for their actions. As for external postings, the firm insists that employees not disparage competitors or reveal customers’ names without permission and asks that any employee posts from IBM accounts or that mention the firm also include disclosures indicating that opinions and thoughts shared publicly are the individual’s and not Big Blue’s. Some firms have more complex social media management challenges. Consider hotels and restaurants where outlets are owned and operated by franchisees rather than the firm. McDonald’s social media team provides additional guidance so that regional operations can create, for example, a Twitter handle (e.g., @mcdonalds_cincy) to handle a promotion in Cincinnati that might not run in other regions (York, 2010). A social media team can provide coordination while giving up the necessary control to local operations. Without this kind of coordination, customer communication can quickly become a mess. Training is also a critical part of the SMART mandate. GM offers an intranet-delivered video course introducing newbies to the basics of social media and to firm policies and expectations. GM also trains employees to become “social media proselytizers and teachers.” GM hopes this approach enables experts to interact directly with customers and partners, allowing the firm to offer authentic and knowledgeable voices online. Training should also cover information security and potential threats. Social media has become a magnet for phishing, virus distribution, and other nefarious online activity. Over one-third of social networking users claim to have been sent malware via social networking sites (see Chapter 13 “Information Security: Barbarians at the Gateway (and Just About Everywhere Else)”). The social media team will need to monitor threats and spread the word on how employees can surf safe and surf smart. Since social media is so public, it’s easy to amass examples of what works and what doesn’t, adding these to the firm’s training materials. The social media team provides a catch point for institutional knowledge and industry best practice, and the team can update programs over time as new issues, guidelines, technologies, and legislation emerge.
The social media space introduces a tension between allowing expression (among employees and by the broader community) and protecting the brand. Firms will fall closer to one end or the other of this continuum depending on compliance requirements, comfort level, and goals. Expect the organization’s position to move. Some firms will be cautious as negative issues erupt, while others will jump in as new technologies become hot and early movers generate buzz and demonstrate results. But it’s the SMART’s responsibility to avoid knee-jerk reactions and to shepherd firm efforts with the professionalism and discipline of other management domains. Astroturfing and Sock Puppets Social media can be a cruel space. Sharp-tongued comments can shred a firm’s reputation, and staff might be tempted to make anonymous posts defending or promoting the firm. Don’t do it! Not only is it a violation of FTC rules, but IP addresses and other online breadcrumbs often leave a trail that exposes deceit. Whole Foods CEO John Mackey fell victim to this kind of temptation, but his actions were eventually, and quite embarrassingly, uncovered. For years, Mackey used a pseudonym to contribute to online message boards, talking up Whole Foods stock and disparaging competitors. When Mackey was unmasked, years of comments were publicly attributed to him. The New York Times cited one particularly cringe-worthy post where Mackey used the pseudonym to compliment his own good looks, writing, “I like Mackey’s haircut. I think he looks cute” (Martin, 2007)! Fake personas set up to sing your own praises are known as sock puppets among the digerati, and the practice of lining comment and feedback forums with positive feedback is known as astroturfing. Do it and it could cost you. The firm behind the cosmetic procedure known as the Lifestyle Lift was fined \$300,000 in civil penalties after the New York Attorney General’s office discovered that the firm’s employees had posed as plastic surgery patients and written glowing reviews of the procedure (Miller, 2009). Review sites themselves will also take action. TripAdvisor penalizes firms if it’s discovered that customers are offered some sort of incentive for posting positive reviews. The firm also employs a series of sophisticated automated techniques as well as manual staff review to uncover suspicious activity. Violators risk penalties that include being banned from the service. Your customers will also use social media to keep you honest. Several ski resorts have been embarrassed when tweets and other social media posts exposed them as overstating snowfall results. There’s even an iPhone app skiers can use to expose inaccurate claims (Rathke, 2010). So keep that ethical bar high—you never know when technology will get sophisticated enough to reveal wrongdoings.
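Much of the listening work described in this section (spotting complaint spikes, catching a brewing #amazonfail, or flagging suspicious praise) can be partially automated. The sketch below is a deliberately minimal illustration of keyword-based brand monitoring; the brand, posts, cue words, and alert threshold are all invented, and real teams typically rely on dedicated social listening tools rather than hand-rolled scripts.

```python
# Minimal sketch of automated "listening": scan a batch of posts for brand
# mentions, flag the negative ones, and escalate when they spike.
# The brand, posts, cue words, and threshold are all invented.
BRAND_TERMS = {"acmeair", "#acmefail"}
NEGATIVE_CUES = {"broke", "rude", "worst", "fail", "refund"}
ALERT_THRESHOLD = 3   # negative brand mentions per batch before escalating

posts = [
    "AcmeAir lost my bag again #acmefail",
    "Great legroom on my AcmeAir flight today",
    "Worst service ever, the AcmeAir gate agent was rude",
    "AcmeAir broke my guitar and won't pay for a refund",
    "Booked a weekend trip, nothing to do with airlines",
]

def mentions_brand(post):
    return any(term in post.lower() for term in BRAND_TERMS)

def sounds_negative(post):
    return any(cue in post.lower() for cue in NEGATIVE_CUES)

negative_mentions = [p for p in posts if mentions_brand(p) and sounds_negative(p)]
if len(negative_mentions) >= ALERT_THRESHOLD:
    print("ALERT: escalate to the social media response team")
for p in negative_mentions:
    print(" -", p)
```

Even a crude filter like this makes the staffing point: someone has to own the alert when it fires, decide whether to respond, and know how to escalate, which is exactly the SMART’s job.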
Learning Objectives After studying this section you should be able to do the following: 1. Be familiar with Facebook’s origins and rapid rise. 2. Understand how Facebook’s rapid rise has impacted the firm’s ability to raise venture funding and its founder’s ability to maintain a controlling interest in the firm. Here’s how much of a Web 2.0 guy Mark Zuckerberg is: during the weeks he spent working on Facebook as a Harvard sophomore, he didn’t have time to study for a course he was taking, “Art in the Time of Augustus,” so he built a Web site containing all of the artwork in class and pinged his classmates to contribute to a communal study guide. Within hours, the wisdom of crowds produced a sort of custom CliffsNotes for the course, and after reviewing the Web-based crib sheet, he aced the test. Turns out he didn’t need to take that exam, anyway. Zuck (that’s what the cool kids call him)1 dropped out of Harvard later that year. Zuckerberg is known as both a shy, geeky, introvert who eschews parties, and as a brash Silicon Valley bad boy. After Facebook’s incorporation, Zuckerberg’s job description was listed as “Founder, Master and Commander [and] Enemy of the State” (McGinn, 2004). An early business card read “I’m CEO…Bitch” (Hoffman, 2008). And let’s not forget that Facebook came out of drunken experiments in his dorm room, one of which was a system for comparing classmates to farm animals (Zuckerberg, threatened with expulsion, later apologized). For one meeting with Sequoia Capital, the venerable Menlo Park venture capital firm that backed Google and YouTube, Zuckerberg showed up in his pajamas (Hoffman, 2008). By the age of twenty-three, Mark Zuckerberg had graced the cover of Newsweek, been profiled on 60 Minutes, and was discussed in the tech world with a reverence previously reserved only for Steve Jobs and the Google guys, Sergey Brin and Larry Page. But Mark Zuckerberg’s star rose much faster than any of his predecessors. Just two weeks after Facebook launched, the firm had four thousand users. Ten months later it was up to one million. The growth continued, and the business world took notice. In 2006, Viacom (parent of MTV) saw that its core demographic was spending a ton of time on Facebook and offered to buy the firm for three quarters of a billion dollars. Zuckerberg passed (Rosenbush, 2006). Yahoo! offered up a cool billion (twice). Zuck passed again, both times. As growth skyrocketed, Facebook built on its stranglehold of the college market (over 85 percent of four-year college students are Facebook members), opening up first to high schoolers, then to everyone. Web hipsters started selling shirts emblazoned with “I Facebooked your Mom!” Even Microsoft wanted some of Facebook’s magic. In 2006, the firm temporarily locked up the right to broker all banner ad sales that run on the U.S. version of Facebook, guaranteeing Zuckerberg’s firm \$100 million a year through 2011. In 2007, Microsoft came back, buying 1.6 percent of the firm for \$240 million2. The investment was a shocker. Do the math and a 1.6 percent stake for \$240 million values Facebook at \$15 billion (more on that later). That meant that a firm that at the time had only five hundred employees, \$150 million in revenues, and was helmed by a twenty-three-year-old college dropout in his first “real job,” was more valuable than General Motors. Rupert Murdoch, whose News Corporation owns rival MySpace, engaged in a little trash talk, referring to Facebook as “the flavor of the month” (Morrissey, 2008). 
Watch your back, Rupert. Or on second thought, watch Zuckerberg’s. By spring 2009, Facebook had more than twice MySpace’s monthly unique visitors worldwide (Schonfeld, 2009); by June, Facebook surpassed MySpace in the United States3; by July, Facebook was cash-flow positive; and by February 2010 (when Facebook turned six), the firm had over four hundred million users, more than doubling in size in less than a year (Gage, 2009). Murdoch, the media titan who stood atop an empire that includes the Wall Street Journal and Fox, had been outmaneuvered by “the kid.” Why Study Facebook? Looking at the “flavor of the month” and trying to distinguish the reality from the hype is a critical managerial skill. In Facebook’s case, there are a lot of folks with a vested interest in figuring out where the firm is headed. If you want to work there, are you signing on to a firm where your stock options and 401k contributions are going to be worth something or worthless? If you’re an investor and Facebook goes public, should you short the firm or increase your holdings? Would you invest in or avoid firms that rely on Facebook’s business? Should your firm rush to partner with it? Would you extend the firm credit? Offer it better terms to secure its growing business, or worse terms because you think it’s a risky bet? Is this firm the next Google (underestimated at first, and now wildly profitable and influential), the next GeoCities (Yahoo! paid \$3 billion for it—no one goes to the site today), or the next Skype (deeply impactful with over half a billion accounts worldwide, but not much of a profit generator)? The jury is still out on all this, but let’s look at the fundamentals with an eye to applying what we’ve learned. No one has a crystal ball, but we do have some key concepts that can guide our analysis. There are a lot of broadly applicable managerial lessons that can be gleaned by examining Facebook’s successes and missteps. Studying the firm provides a context for examining network effects, platforms, partnerships, issues in the rollout of new technologies, privacy, ad models, and more. Zuckerberg Rules! Many entrepreneurs accept start-up capital from venture capitalists (VCs), investor groups that provide funding in exchange for a stake in the firm, and often, a degree of managerial control (usually in the form of a voting seat or seats on the firm’s board of directors). Typically, the earlier a firm accepts VC money, the more control these investors can exert (earlier investments are riskier, so VCs can demand more favorable terms). VCs usually have deep entrepreneurial experience and a wealth of contacts, and can often offer important guidance and advice, but strong investor groups can oust a firm’s founder and other executives if they’re dissatisfied with the firm’s performance. At Facebook, however, Zuckerberg owns an estimated 20 percent to 30 percent of the company, and controls three of five seats on the firm’s board of directors. That means that he’s virtually guaranteed to remain in control of the firm, regardless of what investors say. Maintaining this kind of control is unusual in a start-up, and his influence is a testament to the speed with which Facebook expanded. By the time Zuckerberg reached out to VCs, his firm was so hot that he could call the shots, giving up surprisingly little in exchange for their money. Key Takeaways • Facebook was founded by a nineteen-year-old college sophomore and eventual dropout.
• It is currently the largest social network in the world, boasting more than four hundred million members and usage rates that would be the envy of most media companies. The firm is now larger than MySpace in both the United States and worldwide. • The firm’s rapid rise is the result of network effects and the speed of its adoption placed its founder in a particularly strong position when negotiating with venture firms. As a result, Facebook founder Mark Zuckerberg retains significant influence over the firm. • While revenue prospects remain sketchy, some reports have valued the firm at \$15 billion, based largely on an extrapolation of a Microsoft stake. Questions and Exercises 1. Who started Facebook? How old was he then? Now? How much control does the founding CEO have over his firm? Why? 2. Which firms have tried to acquire Facebook? Why? What were their motivations and why did Facebook seem attractive? Do you think these bids are justified? Do you think the firm should have accepted any of the buyout offers? Why or why not? 3. As of late 2007, Facebook boasted an extremely high “valuation.” How much was Facebook allegedly “worth”? What was this calculation based on? 4. Why study Facebook? Who cares if it succeeds?
Learning Objectives After studying this section you should be able to do the following: 1. Recognize that Facebook’s power is allowing it to encroach on and envelop other Internet businesses. 2. Understand the concept of the “dark Web” and why some feel this may one day give Facebook a source of advantage vis-à-vis Google. 3. Understand the basics of Facebook’s infrastructure, and the costs required to power the effort. The prior era’s Internet golden boy, Netscape founder Marc Andreessen, has said that Facebook is “an amazing achievement one of the most significant milestones in the technology industry” (Vogelstein, 2007). While still in his twenties, Andreessen founded Netscape, eventually selling it to AOL for over \$4 billion. His second firm, Opsware, was sold to HP for \$1.6 billion. He joined Facebook’s Board of Directors within months of making this comment. Why is Facebook considered such a big deal? First there’s the growth: between December 2008 and 2009, Facebook was adding between six hundred thousand and a million users a day. It was as if every twenty-four hours, a group as big or bigger than the entire city of Boston filed into Facebook’s servers to set up new accounts. Roughly half of Facebook users visit the site every single day, (Gage, 2009) with the majority spending fifty-five minutes or more getting their daily Facebook fix1. And it seems that Mom really is on Facebook (Dad, too); users thirty-five years and older account for more than half of Facebook’s daily visitors and its fastest growing population (Hagel & Brown, 2008; Gage, 2009). Then there’s what these users are doing on the site: Facebook isn’t just a collection of personal home pages and a place to declare your allegiance to your friends. The integrated set of Facebook services encroaches on a wide swath of established Internet businesses. Facebook has become the first-choice messaging and chat service for this generation. E-mail is for your professors, but Facebook is for friends. In photos, Google, Yahoo! and MySpace all spent millions to acquire photo sharing tools (Picasa, Flickr, and Photobucket, respectively). But Facebook is now the biggest photo-sharing site on the Web, taking in some three billion photos each month1. And watch out, YouTube. Facebookers share eight million videos each month. YouTube will get you famous, but Facebook is a place most go to share clips you only want friends to see (Vogelstein, 2009). Facebook is a kingmaker, opinion catalyst, and traffic driver. While in the prior decade news stories would carry a notice saying, “Copyright, do not distribute without permission,” major news outlets today, including the New York Times, display Facebook icons alongside every copyrighted story, encouraging users to “share” the content on their profile pages via Facebook’s “Like” button, scattering it all over the Web. Like digital photos, video, and instant messaging, link sharing is Facebook’s sharp elbow to the competition. Suddenly, Facebook gets space on a page alongside Digg.com and Del.icio.us, even though those guys showed up first. Facebook Office? Facebook rolled out the document collaboration and sharing service Docs.com in partnership with Microsoft. Facebook is also hard at work on its own e-mail system (Blodget, 2010), music service (Kincaid, 2010), and payments mechanism (Maher, 2010). Look out, Gmail, Hotmail, Pandora, iTunes, PayPal, and Yahoo!—you may all be in Facebook’s path! As for search, Facebook’s got designs on that, too. 
Google and Bing index some Facebook content, but since much of Facebook is private, accessible only among friends, this represents a massive blind spot for Google search. Sites that can’t be indexed by Google and other search engines are referred to as the dark Web. While Facebook’s partnership with Microsoft currently offers Web search results through Bing.com, Facebook has announced its intention to offer its own search engine with real-time access to up-to-the-minute results from status updates, links, and other information made available to you by your friends. If Facebook can tie together standard Internet search with its dark Web content, this just might be enough for some to break the Google habit. And Facebook is political—in big, regime-threatening ways. The site is considered such a powerful tool in the activist’s toolbox that China, Iran, and Syria are among nations that have, at times, attempted to block Facebook access within their borders. Egyptians have used the site to protest for democracy. Saudi women have used it to lobby for driving privileges. ABC News cosponsored U.S. presidential debates with Facebook. And Facebook cofounder Chris Hughes was even recruited by the Obama campaign to create my.barackobama.com, a social media site considered vital in the 2008 U.S. presidential victory (Talbot, 2008; McGirt, 2009). So What’s It Take to Run This Thing? The Facebook cloud (the big group of connected servers that power the site) is scattered across multiple facilities, including server farms in San Francisco, Santa Clara, and northern Virginia (Zeichick, 2008). The innards that make up the bulk of the system aren’t that different from what you’d find on a high-end commodity workstation. Standard hard drives and eight core Intel processors—just a whole lot of them lashed together through networking and software. Much of what powers the site is open source software (OSS). A good portion of the code is in PHP (a scripting language particularly well-suited for Web site development), while the databases are in MySQL (a popular open source database). Facebook also developed Cassandra, a non-SQL database project for large-scale systems that the firm has since turned over to the open source Apache Software Foundation. The object cache that holds Facebook’s frequently accessed objects is in chip-based RAM instead of on slower hard drives and is managed via an open source product called Memcache. Other code components are written in a variety of languages, including C++, Java, Python, and Ruby, with access between these components managed by a code layer the firm calls Thrift (developed at Facebook, which was also turned over to the Apache Software Foundation). Facebook also developed its own media serving solution, called Haystack. Haystack coughs up photos 50 percent faster than more expensive, proprietary solutions, and since it’s done in-house, it saves Facebook costs that other online outlets spend on third-party content delivery networks (CDN) like Akamai. Facebook receives some fifty million requests per second (Gaudin, 2009), yet 95 percent of data queries can be served from a huge, distributed server cache that lives in over fifteen terabytes of RAM (objects like video and photos are stored on hard drives) (Zeichick, 2008). Hot stuff (literally), but it’s not enough. The firm raised several hundred million dollars more in the months following the fall 2007 Microsoft deal, focused largely on expanding the firm’s server network to keep up with the crush of growth. 
The one hundred million dollars raised in May 2008 was “used entirely for servers” (Ante, 2008). Facebook will be buying them by the thousands for years to come. And it’ll pay a pretty penny to keep things humming. Estimates suggest the firm spends \$1 million a month on electricity, another half million a month on telecommunications bandwidth, and at least fifteen million dollars a year in office and data center rental payments (Arrington, 2009). Key Takeaways • Facebook’s position as the digital center of its members’ online social lives has allowed the firm to envelop related businesses such as photo and video sharing, messaging, bookmarking, and link sharing. Facebook has opportunities to expand into other areas as well. • Much of the site’s content is in the dark Web, unable to be indexed by Google or other search engines. Some suggest this may create an opportunity for Facebook to challenge Google in search. • Facebook can be a vital tool for organizers—presenting itself as both opportunity and threat to those in power, and an empowering medium for those seeking to bring about change. • Facebook’s growth requires a continued and massive infrastructure investment. The site is powered largely on commodity hardware, open source software, and proprietary code tailored to the specific needs of the service. Questions and Exercises 1. What is Facebook? How do people use the site? What do they “do” on Facebook? 2. What markets has Facebook entered? What factors have allowed the firm to gain share in these markets at the expense of established firms? In what ways does it enjoy advantages that a traditional new entrant in such markets would not? 3. What is the “dark Web” and why is it potentially an asset to Facebook? Why is Google threatened by Facebook’s dark Web? What firms might consider an investment in the firm, if it provided access to this asset? Do you think the dark Web is enough to draw users to a Facebook search product over Google? Why or why not? 4. As Facebook grows, what kinds of investments continue to be necessary? What are the trends in these costs over time? Do you think Facebook should wait in making these investments? Why or why not? 5. Investments in servers and other capital expenses typically must be depreciated over time. What does this imply about how the firm’s profitability is calculated? 6. How have media attitudes toward their copyrighted content changed over the past decade? Why is Facebook a potentially significant partner for firms like the New York Times? What does the Times stand to gain by encouraging “sharing” its content? What do newspapers and others sites really mean when they encourage sites to “share?” What actually is being passed back and forth? Do you think this ultimately helps or undermines the Times and other newspaper and magazine sites? Why? 1“Facebook Facts and Figures (History and Statistics),” Website Monitoring Blog, March 17, 2010.
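As an aside on the infrastructure described in this section, the sketch below illustrates the general “cache-aside” idea behind serving most reads from RAM: check the cache first, and only fall back to the (much slower) database on a miss. This is an illustration of the pattern only, not Facebook’s code; the in-memory dictionary stands in for a memcache cluster, and fetch_profile_from_db is a hypothetical placeholder for a real MySQL query.

```python
import time

# Stand-in for a memcache cluster: an in-memory dict of key -> (value, expiry time).
# In a real deployment this would be a client talking to memcache servers.
_cache = {}
TTL_SECONDS = 300  # how long a cached object stays valid (an assumed value)

def fetch_profile_from_db(user_id):
    # Hypothetical placeholder for a slow database query.
    return {"user_id": user_id, "name": "Example User"}

def get_profile(user_id):
    """Cache-aside read: try RAM first, fall back to the database on a miss."""
    key = f"profile:{user_id}"
    entry = _cache.get(key)
    if entry is not None and entry[1] > time.time():
        return entry[0]                       # cache hit: no database or disk touched
    profile = fetch_profile_from_db(user_id)  # cache miss: do the expensive query...
    _cache[key] = (profile, time.time() + TTL_SECONDS)  # ...and remember the result
    return profile

def update_profile(user_id, new_fields):
    """Writes go to the database; the stale cached copy is then invalidated."""
    # (database write would happen here)
    _cache.pop(f"profile:{user_id}", None)

print(get_profile(42))  # first call misses the cache; a second call is served from RAM
print(get_profile(42))
```

When the vast majority of reads are absorbed this way, the databases and disks behind the cache can be far smaller than the raw request volume would suggest, which is the point the requests-per-second and RAM figures above are making.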
textbooks/biz/Management/Book%3A_Information_Systems__A_Managers_Guide_to_Harnessing_Technology/08%3A_Facebook-_Building_a_Business_from_the_Social_Graph/8.02%3A_Whats_the_Big_Deal.txt
Learning Objectives After studying this section you should be able to do the following: 1. Understand the concept of feeds, why users rebelled against Facebook feeds, and why users eventually embraced this feature. 2. Recognize the two strategic resources that are most critical to Facebook’s competitive advantage and why Facebook was able to create these resources while MySpace has fallen short. 3. Appreciate that while Facebook’s technology can be easily copied, barriers to sustain any new entrant are extraordinarily high, and the likelihood that a firm will win significant share from Facebook by doing the same thing is considerably remote. At the heart of Facebook’s appeal is a concept Zuckerberg calls the social graph, which refers to Facebook’s ability to collect, express, and leverage the connections between the site’s users, or as some describe it, “the global mapping of everyone and how they’re related” (Iskold, 2007). Think of all the stuff that’s on Facebook as a node or endpoint that’s connected to other stuff. You’re connected to other users (your friends), photos about you are tagged, comments you’ve posted carry your name, you’re a member of groups, you’re connected to applications you’ve installed—Facebook links them all (Zeichick, 2008). While MySpace and Facebook are often mentioned in the same sentence, from their founding these sites were conceived differently. It goes beyond the fact that Facebook, with its neat, ordered user profiles, looks like a planned community compared to the garish, Vegas-like free-for-all of MySpace. MySpace was founded by musicians seeking to reach out to unknown users and make them fans. It’s no wonder the firm, with its proximity to Los Angeles and ownership by News Corporation, is viewed as more of a media company. It has cut deals to run network television shows on its site, and has even established a record label. It’s also important to note that from the start anyone could create a MySpace identity, and this open nature meant that you couldn’t always trust what you saw. Rife with bogus profiles, even News Corporation’s Rupert Murdoch has had to contend with the dozens of bogus Ruperts who have popped up on the service (Petrecca, 2006)! Facebook, however, was established in the relatively safe cocoon of American undergraduate life, and was conceived as a place where you could reinforce contacts among those who, for the most part, you already knew. The site was one of the first social networks where users actually identified themselves using their real names. If you wanted to establish that you worked for a certain firm or were a student of a particular university, you had to verify that you were legitimate via an e-mail address issued by that organization. It was this “realness” that became Facebook’s distinguishing feature—bringing along with it a degree of safety and comfort that enabled Facebook to become a true social utility and build out a solid social graph consisting of verified relationships. Since “friending” (which is a link between nodes in the social graph) required both users to approve the relationship, the network fostered an incredible amount of trust. Today, many Facebook users post their cell phone numbers and their birthdays, offer personal photos, and otherwise share information they’d never do outside their circle of friends. Because of trust, Facebook’s social graph is stronger than MySpace’s. There is also a strong network effect to Facebook (see Chapter 6 “Understanding Network Effects”). 
People are attracted to the service because others they care about are more likely to be there than anywhere else online. Without the network effect Facebook wouldn’t exist. And it’s because of the network effect that another smart kid in a dorm can’t rip off Zuckerberg in any market where Facebook is the biggest fish. Even an exact copy of Facebook would be a virtual ghost town with no social graph (see Note 8.23 “It’s Not the Technology” below). The switching costs for Facebook are also extremely powerful. A move to another service means recreating your entire social graph. The more time you spend on the service, the more you’ve invested in your graph and the less likely you are to move to a rival. It’s Not the Technology Does your firm have Facebook envy? KickApps, an eighty-person start-up in Manhattan, will give you the technology to power your own social network. All KickApps wants is a cut of the ads placed around your content. In its first two years, the site has provided the infrastructure for twenty thousand “mini Facebooks,” registering three hundred million page views a month (Urstadt, 2008). NPR, ABC, AutoByTel, Harley-Davidson, and Kraft all use the service (social networks for Cheez Whiz?). There’s also Ning, which has enabled users to create over 2.3 million mini networks organized on all sorts of topics as diverse as church groups, radio personalities, vegans, diabetes sufferers, and networks limited to just family members. Or how about the offering from Agriya Infoway, based in Chennai, India? The firm will sell you Kootali, a software package that lets developers replicate Facebook’s design and features, complete with friend networks, photos, and mini-feeds. They haven’t stolen any code, but they have copied the company’s look and feel. Those with Zuckerberg ambitions can shell out the four hundred bucks for Kootali. Sites with names like Faceclub.com and Umicity.com have done just that—and gone nowhere. Mini networks that extend the conversation (NPR) or make it easier to find other rabidly loyal product fans (Harley-Davidson) may hold a niche for some firms. And Ning is a neat way for specialized groups to quickly form in a secure environment that’s all their own (it’s just us, no “creepy friends” from the other networks). While every market has a place for its niches, none of these will grow to compete with the dominant social networks. The value isn’t in the technology; it’s in what the technology has created over time. For Facebook, it’s a huge user base that (for now at least) is not going anywhere else. Key Takeaways • The social graph expresses the connections between individuals and organizations. • Trust created through user verification and friend approval requiring both parties to consent encouraged Facebook users to share more and helped the firm establish a stronger social graph than MySpace or other social networking rivals. • Facebook’s key resources for competitive advantage are network effects and switching costs. These resources make it extremely difficult for copycat firms to steal market share from Facebook. Questions and Exercises 1. Which is bigger, Facebook or MySpace? How are these firms different? Why would a person or organization be attracted to one service over another? 2. What is the social graph? Why is Facebook’s social graph considered to be stronger than the social graph available to MySpace users? 3. In terms of features and utility, how are Facebook and MySpace similar? How are they different?
Why would a user choose to go to one site instead of another? Are you a member of either of these sites? Both? Why? Do you feel that they are respectively pursuing lucrative markets? Why or why not? If given the opportunity, would you invest in either firm? Why or why not? 4. If you were a marketer, which firm would you target for an online advertising campaign—Facebook or MySpace? Why? 5. Does Facebook have to worry about copycat firms from the United States? In overseas markets? Why or why not? If Facebook has a source (or sources) of competitive advantage, explain these. If it has no advantage, discuss why.
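To make the node-and-edge description in this section concrete, the sketch below models a tiny social graph in Python. The names and relationships are invented for illustration; the key points are that “friending” creates a mutual edge only after both sides approve, and that everything else (photos, groups, comments) can hang off those verified connections.

```python
from collections import defaultdict

class SocialGraph:
    """A toy social graph: users are nodes, approved friendships are undirected edges."""

    def __init__(self):
        self.friends = defaultdict(set)      # user -> set of confirmed friends
        self.pending = defaultdict(set)      # user -> set of requests awaiting approval

    def request_friend(self, requester, target):
        self.pending[target].add(requester)  # nothing is linked yet

    def approve_friend(self, approver, requester):
        if requester in self.pending[approver]:
            self.pending[approver].discard(requester)
            # both parties consented, so the edge is added in both directions
            self.friends[approver].add(requester)
            self.friends[requester].add(approver)

    def mutual_friends(self, a, b):
        return self.friends[a] & self.friends[b]

graph = SocialGraph()
graph.request_friend("alice", "bob")
graph.approve_friend("bob", "alice")         # only now are Alice and Bob connected
graph.request_friend("alice", "carol")
graph.approve_friend("carol", "alice")
graph.request_friend("bob", "carol")
graph.approve_friend("carol", "bob")
print(graph.mutual_friends("alice", "bob"))  # {'carol'}
```

Switching costs show up naturally in this structure: the value lies in the accumulated edges, and none of them come along if a user starts over as an empty node on a rival service.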
textbooks/biz/Management/Book%3A_Information_Systems__A_Managers_Guide_to_Harnessing_Technology/08%3A_Facebook-_Building_a_Business_from_the_Social_Graph/8.03%3A_The_Social_Graph.txt
Learning Objectives After studying this section you should be able to do the following: 1. Understand the concept of feeds, why users rebelled, and why users eventually embraced this feature. 2. Recognize the role of feeds in viral promotions, catalyzing innovation, and supporting rapid organizing. While the authenticity and trust offered by Facebook was critical, offering News Feeds concentrated and released value from the social graph. With feeds, each time a user performs an activity in Facebook—makes a friend, uploads a picture, joins a group—the feed blasts this information to all of your friends in a reverse chronological list that shows up right when they next log on. An individual user’s activities are also listed within a mini feed that shows up on their profile. Get a new job, move to a new city, read a great article, have a pithy quote—post it to Facebook—the feed picks it up, and the world of your Facebook friends will get an update. Feeds are perhaps the linchpin of Facebook’s ability to strengthen and deliver user value from the social graph, but for a brief period of time it looked like feeds would kill the company. News Feeds were launched on September 5, 2006, just as many of the nation’s undergrads were arriving on campus. Feeds reflecting any Facebook activity (including changes to the relationship status) became a sort of gossip page splashed right when your friends logged in. To many, feeds were first seen as a viral blast of digital nosiness—a release of information they hadn’t consented to distribute widely. And in a remarkable irony, user disgust over the News Feed ambush offered a whip-crack demonstration of the power and speed of the feed virus. Protest groups formed, and every student who, for example, joined a group named Students Against Facebook News Feed, had this fact blasted to their friends (along with a quick link where friends, too, could click to join the group). Hundreds of thousands of users mobilized against the firm in just twenty-four hours. It looked like Zuckerberg’s creation had turned on him, Frankenstein style. The first official Facebook blog post on the controversy came off as a bit condescending (never a good tone to use when your customers feel that you’ve wronged them). “Calm down. Breathe. We hear you,” wrote Zuckerberg on the evening of September 5. The next post, three days after the News Feed launch, was much more contrite (“We really messed this one up,” he wrote). In the 484-word open letter, Zuckerberg apologized for the surprise, explaining how users could opt out of feeds. The tactic worked, and the controversy blew over (Vogelstein, 2007). The ability to stop personal information from flowing into the feed stream was just enough to stifle critics, and as it turns out, a lot of people really liked the feeds and found them useful. It soon became clear that if you wanted to use the Web to keep track of your social life and contacts, Facebook was the place to be. Not only did feeds not push users away, by the start of the next semester subscribers had nearly doubled! Key Takeaways • Facebook feeds foster the viral spread of information and activity. • Feeds were initially unwanted by many Facebook users. Feeds themselves helped fuel online protests against the feed feature. • Today feeds are considered one of the most vital, value-adding features to Facebook and other social networking sites. • Users often misperceive technology and have difficulty in recognizing an effort’s value (as well as its risks). 
They have every right to be concerned and protective of their privacy. It is the responsibility of firms to engage users on new initiatives and to protect user privacy. Failure to do so risks backlash. Questions and Exercises 1. What is the “linchpin” of Facebook’s ability to strengthen and deliver user-value from the social graph? 2. How did users first react to feeds? What could Facebook have done to better manage the launch? 3. How do you feel about Facebook feeds? Have you ever been disturbed by information about you or someone else that has appeared in the feed? Did this prompt action? Why or why not? 4. Visit Facebook and experiment with privacy settings. What kinds of control do you have over feeds and data sharing? Is this enough to set your mind at ease? Did you know these settings existed before being prompted to investigate features? 5. What other Web sites are leveraging features that mimic Facebook feeds? Do you think these efforts are successful or not? Why?
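As a companion to the questions above, here is a minimal sketch of the feed idea described in this section: each action a user takes is recorded, and a friend’s feed is simply their friends’ recent actions merged into reverse chronological order. The activity data and friend lists are invented for illustration; a real feed system would also rank, filter, and respect each user’s privacy settings.

```python
from datetime import datetime

# Invented sample data: who is friends with whom, and what they have done recently.
friends = {"dana": ["erin", "frank"]}
activities = [
    {"user": "erin",  "action": "joined the group 'Campus Photography'", "time": datetime(2006, 9, 5, 9, 15)},
    {"user": "frank", "action": "uploaded a photo",                      "time": datetime(2006, 9, 5, 11, 40)},
    {"user": "erin",  "action": "changed her relationship status",       "time": datetime(2006, 9, 5, 14, 5)},
]

def build_feed(viewer, limit=10):
    """Collect friends' activities and show the newest items first."""
    items = [a for a in activities if a["user"] in friends.get(viewer, [])]
    items.sort(key=lambda a: a["time"], reverse=True)  # reverse chronological order
    return items[:limit]

for item in build_feed("dana"):
    print(f'{item["time"]:%b %d %H:%M}  {item["user"]} {item["action"]}')
```

The September 2006 backlash was essentially an argument over which actions get written into that activity list and who gets to see them, not over the mechanics of the list itself.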
textbooks/biz/Management/Book%3A_Information_Systems__A_Managers_Guide_to_Harnessing_Technology/08%3A_Facebook-_Building_a_Business_from_the_Social_Graph/8.04%3A_Facebook_FeedsEbola_for_Data_Flows.txt
Learning Objectives After studying this section you should be able to do the following: 1. Understand how Facebook created a platform and the potential value this offers the firm. 2. Recognize that running a platform also presents a host of challenges to the platform operator. In May 2007, Facebook followed News Feeds with another initiative that set it head and shoulders above its competition. At the firm’s first f8 (pronounced “fate”) Developers Conference, Mark Zuckerberg stood on stage and announced that he was opening up the screen real estate on Facebook to other application developers. Facebook published a set of application programming interfaces (APIs) that specified how programs could be written to run within and interact with Facebook. Now any programmer could write an application that would run inside a user’s profile. Geeks of the world, Facebook’s user base could be yours! Just write something good. Developers could charge for their wares, offer them for free, and even run ads. And Facebook let developers keep what they made (Facebook does revenue share with app vendors for some services, such as the Facebook Credits payment service, mentioned later). This was a key distinction; MySpace initially restricted developer revenue on the few products designed to run on their site, at times even blocking some applications. The choice was clear, and developers flocked to Facebook. To promote the new apps, Facebook would run an Applications area on the site where users could browse offerings. Even better, News Feed was a viral injection that spread the word each time an application was installed. Your best friend just put up a slide show app? Maybe you’ll check it out, too. The predictions of \$1 billion in social network ad spending were geek catnip, and legions of programmers came calling. Apps could be cobbled together on the quick, feeds made them spread like wildfire, and the early movers offered adoption rates never before seen by small groups of software developers. People began speaking of the Facebook Economy. Facebook was considered a platform. Some compared it to the next Windows, Zuckerberg the next Gates (hey, they both dropped out of Harvard, right?). And each application potentially added more value and features to the site without Facebook lifting a finger. The initial event launched with sixty-five developer partners and eighty-five applications. There were some missteps along the way. Some applications were accused of spamming friends with invites to install them. There were also security concerns and apps that violated the intellectual property of other firms (see the “Scrabulous” sidebar below), but Facebook worked to quickly remove errant apps, improve the system, and encourage developers. Just one year in, Facebook had marshaled the efforts of some four hundred thousand developers and entrepreneurs, twenty-four thousand applications had been built for the platform, 140 new apps were being added each day, and 95 percent of Facebook members had installed at least one Facebook application. As Sarah Lacy, author of Once You’re Lucky, Twice You’re Good, put it, “with one masterstroke, Zuck had mobilized all of Silicon Valley to innovate for him.” With feeds to spread the word, Facebook was starting to look like the first place to go to launch an online innovation. Skip the Web, bring it to Zuckerberg’s site first. 
Consider iLike: within the first three months, the firm saw installs of its Facebook app explode to seven million, more than doubling the number of users the firm was able to attract through the Web site it introduced the previous year. ILike became so cool that by September, platinum rocker KT Tunstall was debuting tracks through the Facebook service. A programmer named Mark Pincus wrote a Texas hold ’em game at his kitchen table (Guynn, 2007). Today his social gaming firm, Zynga, is a powerhouse—a profitable firm with over three dozen apps, over 230 million users (MacMillan, 2009), and more than \$600 million in annual revenue (Learmonth & Klaasen, 2009; Carlson & Angelova, 2010). Zynga games include MafiaWars, Vampires, and the wildly successful FarmVille, which boasts some twenty times the number of actual farms in the United States. App firm Slide (started by PayPal cofounder Max Levchin) scored investments from Legg Mason, and Fidelity pegged the firm’s value at \$500 million (Hempel & Copeland, 2008). Playfish, the U.K. social gaming firm behind the Facebook hits Pet Society and Restaurant City, was snapped up by Electronic Arts for \$300 million with another \$100 million due if the unit hits performance targets. Lee Lorenzen, founder of Altura Ventures, an investment firm exclusively targeting firms creating Facebook apps, said, “Facebook is God’s gift to developers. Never has the path from a good idea to millions of users been shorter” (Guynn, 2007). I Majored in Facebook Once Facebook became a platform, Stanford professor BJ Fogg thought it would be a great environment for a programming class. In ten weeks his seventy-three students built a series of applications that collectively received over sixteen million installs. By the final week of class, several applications developed by students, including KissMe, Send Hotness, and Perfect Match, had received millions of users, and class apps collectively generated more than half a million dollars in ad revenue. At least three companies were formed from the course. But legitimate questions remain. Are Facebook apps really a big deal? Just how important will apps be to adding sustained value within Facebook? And how will firms leverage the Facebook framework to extract their own value? A chart from FlowingData showed the top category, Just for Fun, was larger than the next four categories combined. That suggests that a lot of applications are faddish time wasters. Yes, there is experimentation beyond virtual Zombie Bites. Visa has created a small business network on Facebook (Facebook had some eighty thousand small businesses online at the time of Visa’s launch). Educational software firm Blackboard offered an application that will post data to Facebook pages as soon as there are updates to someone’s Blackboard account (new courses, whether assignments or grades have been posted, etc.). We’re still a long way from Facebook as a Windows rival, but the platform helped push Facebook to number one, and it continues to deliver quirky fun (and then some) supplied by thousands of developers off its payroll. Scrabulous Rajat and Jayant Agarwalla, two brothers in Kolkata, India, who run a modest software development company, decided to write a Scrabble clone as a Facebook application. The app, named Scrabulous, was social—users could invite friends to play, or they could search for new players looking for an opponent. 
Their application was a smash, snagging three million registered users and seven hundred thousand players a day after just a few months. Scrabulous was featured in PC World’s 100 best products of 2008, received coverage in the New York Times, Newsweek, and Wired, and was pulling in about twenty-five thousand dollars a month from online advertising. Way to go, little guys (Timmons, 2008)! There is only one problem: the Agarwalla brothers didn’t have the legal rights to Scrabble, and it was apparent to anyone that from the name to the tiles to the scoring—this was a direct rip-off of the well-known board game. Hasbro owns the copyright to Scrabble in the United States and Canada; Mattel owns it everywhere else. Thousands of fans joined Facebook groups with names like “Save Scrabulous” and “Please God, I Have So Little: Don’t Take Scrabulous, Too.” Users in some protest groups pledged never to buy Hasbro games if Scrabulous was stopped. Even if the firms wanted to succumb to pressure and let the Agarwalla brothers continue, they couldn’t. Both Electronic Arts and RealNetworks have contracted with the firms to create online versions of the game. While the Facebook Scrabulous app is long gone, the tale shows just one of the challenges of creating a platform. In addition to copyright violations, app makers have crafted apps that annoy, raise privacy and security concerns, purvey pornography, or otherwise step over the boundaries of good taste. Firms from Facebook to Apple (through its iTunes Store) have struggled to find the right mix of monitoring, protection, and approval while avoiding cries of censorship. Key Takeaways • Facebook’s platform allows the firm to further leverage the network effect. Developers creating applications create complementary benefits that have the potential to add value to Facebook beyond what the firm itself provides to its users. • There is no revenue-sharing mandate among platform partners—whatever an application makes can be kept by its developers (although Facebook does provide some services via revenue sharing, such as Facebook Credits). • Most Facebook applications are focused on entertainment. The true, durable, long-term value of Facebook’s platform remains to be seen. • Despite this, some estimates claim Facebook platform developers earned more than Facebook itself in 2009. • Running a platform can be challenging. Copyright, security, appropriateness, free speech tensions, efforts that tarnish platform operator brands, privacy, and the potential for competition with partners, all can make platform management more complex than simply creating a set of standards and releasing this to the public. Questions and Exercises 1. Why did more developers prefer to write apps for Facebook than for MySpace? 2. What competitive asset does the application platform initiative help Facebook strengthen? For example, how do apps make Facebook stronger when compared to rivals? 3. What’s Scrabulous? Did the developers make money? What happened to the firm and why? 4. Have you used Facebook apps? Which are your favorites? What makes them successful? 5. Leverage your experience or conduct additional research—are there developers who you feel have abused the Facebook app network? Why? What is Facebook’s responsibility (if any) to control such abuse? 6. How do most app developers make money? Have you ever helped a Facebook app developer earn money? How or why not? 7. How do Facebook app revenue opportunities differ from those leveraged by a large portion of iTunes Store apps?
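The platform idea in this section can be illustrated without reference to Facebook’s actual APIs, which have changed many times since f8. The sketch below shows, in schematic form, what a platform operator makes possible: the host exposes a small, controlled interface to identity, friend data, and the feed, and third-party applications build features on top of it. Every class, method, and name here is hypothetical and is not Facebook’s real interface.

```python
class HostPlatform:
    """Hypothetical platform operator: owns the users and the social graph."""

    def __init__(self):
        self._users = {"gina": {"friends": ["hal", "ivy"], "installed_apps": set()}}

    # The only things exposed to third-party apps: friends and a feed-publishing hook.
    def get_friends(self, user_id):
        return list(self._users[user_id]["friends"])

    def publish_to_feed(self, user_id, message):
        print(f"[feed of {user_id}'s friends] {message}")  # viral distribution hook

    def install_app(self, user_id, app):
        self._users[user_id]["installed_apps"].add(app.name)
        app.on_install(user_id, self)  # hand the app a way to call back into the platform


class TriviaApp:
    """Hypothetical third-party application built on the platform's exposed calls."""
    name = "Trivia Challenge"

    def on_install(self, user_id, platform):
        # The app never touches the platform's database; it only uses the exposed calls.
        friends = platform.get_friends(user_id)
        platform.publish_to_feed(user_id, f"{user_id} just installed {self.name}. Challenge them!")
        print(f"{self.name} can now invite: {friends}")


platform = HostPlatform()
platform.install_app("gina", TriviaApp())
```

The managerial takeaway is the economic one made in this section: the host gains features without lifting a finger, while developers gain distribution through the feed that they could never build on their own.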
textbooks/biz/Management/Book%3A_Information_Systems__A_Managers_Guide_to_Harnessing_Technology/08%3A_Facebook-_Building_a_Business_from_the_Social_Graph/8.05%3A_Facebook_as_a_Platform.txt
Learning Objectives After studying this section you should be able to do the following: 1. Describe the differences in the Facebook and Google ad models. 2. Explain the Hunt versus Hike metaphor, contrast the relative success of ad performance on search compared to social networks, and understand the factors behind the latter’s struggles. 3. Recognize how firms are leveraging social networks for brand and product engagement, be able to provide examples of successful efforts, and give reasons why such engagement is difficult to achieve. If Facebook is going to continue to give away its services for free, it needs to make money somehow. Right now the bulk of revenue comes from advertising. Fortunately for the firm, online advertising is hot. For years, online advertising has been the only major media category that has seen an increase in spending (see Chapter 14 “Google: Search, Online Advertising, and Beyond”). Firms spend more on advertising online than they do on radio and magazine ads, and the Internet will soon beat out spending on cable TV (Sweeney, 2008; Wayne, 2010). But not all Internet advertising is created equal. And there are signs that social networking sites are struggling to find the right ad model. Google founder Sergey Brin sums up this frustration, saying, “I don’t think we have the killer best way to advertise and monetize social networks yet,” adding that social networking ad inventory as a whole was proving problematic and that the “monetization work we were doing [in social media] didn’t pan out as well as we had hoped1.” When Google ad partner Fox Interactive Media (the News Corporation division that contains MySpace) announced that revenue would fall \$100 million short of projections, News Corporation’s stock tumbled 5 percent, analysts downgraded the company, and the firm’s chief revenue officer was dismissed (Stelter, 2008). Why aren’t social networks having the success of Google and other sites? Problems with advertising on these sites include content adjacency and user attention. The content adjacency problem refers to concern over where a firm’s advertisements will run. Consider all of the questionable titles in social networking news groups. Do advertisers really want their ads running alongside conversations that are racy, offensive, illegal, or that may even mock their products? This potential juxtaposition is a major problem with any site offering ads adjacent to free-form social media. Summing up industry wariness, one P&G manager said, “What in heaven’s name made you think you could monetize the real estate in which somebody is breaking up with their girlfriend?” (Stone, 2008) An IDC report suggests that it’s because of content adjacency that “brand advertisers largely consider user-generated content as low-quality, brand-unsafe inventory” for running ads (Stross, 2008). Now let’s look at the user attention problem. Attention Challenges: The Hunt Versus The Hike In terms of revenue model, Facebook is radically different from Google and the hot-growth category of search advertising. Users of Google and other search sites are on a hunt—a task-oriented expedition to collect information that will drive a specific action. Search users want to learn something, buy something, research a problem, or get a question answered. To the extent that the hunt overlaps with ads, it works. Just searched on a medical term? Google will show you an ad from a drug company. Looking for a toy? You’ll see Google ads from eBay sellers and other online shops.
Type in a vacation destination and you get a long list of ads from travel providers aggressively courting your spending. Even better, Google only charges text advertisers when a user clicks through. No clicks? The ad runs at no cost to the firm. From a return on investment perspective, this is extraordinarily efficient. How often do users click on Google ads? Enough for this to be the single most profitable activity among any Internet firm. In 2009, Google revenue totaled nearly \$24 billion. Profits exceeded \$6.5 billion, almost all of this from pay-per-click ads (see Chapter 14 “Google: Search, Online Advertising, and Beyond” for more details). While users go to Google to hunt, they go to Facebook as if they were going on a hike—they have a rough idea of what they’ll encounter, but they’re there to explore and look around, enjoy the sights (or site). They’ve usually allocated time for fun and they don’t want to leave the terrain when they’re having conversations, looking at photos or videos, and checking out updates from friends. These usage patterns are reflected in click-through rates. Google users click on ads around 2 percent of the time (and at a much higher rate when searching for product information). At Facebook, click-throughs are about 0.04 percent (Urstadt, 2008). Most banner ads don’t charge per click but rather CPM (cost per thousand) impressions (each time an ad appears on someone’s screen). But Facebook banner ads performed so poorly that the firm pulled them in early 2010 (McCarthy, 2010). Lookery, a one-time ad network that bought ad space on Facebook in bulk, had been reselling inventory at a CPM of 7.5 cents (note that Facebook does offer advertisers pay-per-click as well as impression-based, or CPM, options). Even Facebook ads with a bit of targeting weren’t garnering much (Facebook’s Social Ads, which allow advertisers to target users according to location and age, have a floor price of fifteen cents CPM) (Urstadt, 2008; Schonfeld, 2008). Other social networks also suffered. In 2008, MySpace lowered its banner ad rate from \$3.25 CPM to less than two dollars. By contrast, information and news-oriented sites do much better, particularly if these sites draw in a valuable and highly targeted audience. The social networking blog Mashable has CPM rates ranging between seven and thirty-three dollars. Technology Review magazine boasts a CPM of seventy dollars. TechTarget, a Web publisher focusing on technology professionals, has been able to command CPM rates of one hundred dollars and above (an ad inventory that valuable helped the firm go public in 2007). Getting Creative with Promotions: Does It Work? Facebook and other social networks are still learning what works. Ad inventory displayed on high-traffic home pages have garnered big bucks for firms like Yahoo! With Facebook offering advertisers greater audience reach than most network television programs, there’s little reason to suggest that chunks of this business won’t eventually flow to the social networks. But even more interesting is how Facebook and widget sites have begun to experiment with relatively new forms of advertising. Many feel that Facebook has a unique opportunity to get consumers to engage with their brand, and some initial experiments point where this may be heading. Many firms have been leveraging so-called engagement ads by making their products part of the Facebook fun. 
Using an engagement ad, a firm can set up a promotion where a user can do things such as “Like” or become a fan of a brand, RSVP to an event and invite others, watch and comment on a video and see what your friends have to say, send a “virtual gift” with a personal message, or answer a question in a poll. The viral nature of Facebook allows actions to flow back into the news feed and spread among friends. COO Sheryl Sandberg discussed Ben & Jerry’s promotion for the ice cream chain’s free cone day event. To promote the upcoming event, Ben & Jerry’s initially contracted to make two hundred and fifty thousand “gift cones” available to Facebook users; they could click on little icons that would gift a cone icon to a friend, and that would show up in their profile. Within a couple of hours, customers had sent all two hundred and fifty thousand virtual cones. Delighted, Ben & Jerry’s bought another two hundred and fifty thousand cones. Within eleven hours, half a million people had sent cones, many making plans with Facebook friends to attend the real free cone day. The day of the Facebook promotion, Ben & Jerry’s Web site registered fifty-three million impressions, as users searched for store locations and wrote about their favorite flavors (Hardy, 2008). The campaign dovetailed with everything Facebook was good at: it was viral, generating enthusiasm for a promotional event and even prompting scheduling. In other promotions, Honda gave away three quarters of a million hearts during a Valentine’s Day promo (Sandberg, 2009), and the Dr. Pepper Snapple Group offered two hundred and fifty thousand virtual Sunkist sodas, which earned the firm one hundred thirty million brand impressions in twenty-two hours. Says Sunkist’s brand manager, “A Super Bowl ad, if you compare it, would have generated somewhere between six to seven million” (Wong, 2008). Facebook, Help Get Me a Job! The papers are filled with stories about employers scouring Facebook for dirt on potential hires. But one creative job seeker turned the tables and used Facebook to make it easier for firms to find him. Recent MBA graduate Eric Barker, a talented former screenwriter with experience in the film and gaming industry, bought ads promoting himself on Facebook, setting them up to run only on the screens of users identified as coming from firms he’d like to work for. In this way, someone Facebook identified as being from Microsoft would see an ad from Eric declaring “I Want to Be at Microsoft” along with an offer to click and learn more. The cost to run the ads was usually less than \$5 a day. Said Barker, “I could control my bid price and set a cap on my daily spend. Starbucks put a bigger dent in my wallet than promoting myself online.” The ads got tens of thousands of impressions, hundreds of clicks, and dozens of people called offering assistance. Today, Eric Barker is gainfully employed at a “dream job” in the video game industry2 (Sentementes, 2010). Figure 8.1 Eric Barker used Facebook to advertise himself to prospective employers. Of course, even with this business, Facebook may find that it competes with widget makers. Unlike Apple’s App Store (where much of developer-earned revenue comes from selling apps), the vast majority of Facebook apps are free and supported by ads. That means Facebook and its app providers are both running at a finite pot of advertising dollars. Slide’s Facebook apps have attracted top-tier advertisers, such as Coke and Paramount Pictures—a group Facebook regularly courts as well. 
By some estimates, in 2009, Facebook app developers took in well over half a billion dollars—exceeding Facebook’s own haul (Learmonth & Klaasen, 2009). And there’s controversy. Zynga was skewered in the press when some of its partners were accused of scamming users into signing up for subscriptions or installing unwanted software in exchange for game credits (Zynga has since taken steps to screen partners and improve transparency) (Arrington, 2009). While these efforts might be innovative, are they even effective? Some of these programs are considered successes; others, not so much. Jupiter Research surveyed marketers trying to create a viral impact online and found that only about 15 percent of these efforts actually caught on with consumers (Cowan, 2008). While the Ben & Jerry’s gift cones were used up quickly, a visit to Facebook in the weeks after this campaign saw CareerBuilder, Wide Eye Caffeinated Spirits, and Coors Light icons lingering days after their first appearance. Brands seeking to deploy their own applications in Facebook have also struggled. New Media Age reported that applications rolled out by top brands such as MTV, Warner Bros., and Woolworths were found to have as little as five daily users. Congestion may be setting in for all but the most innovative applications, as standing out in a crowd of over 550,000 applications becomes increasingly difficult3. Consumer products giant P&G has been relentlessly experimenting with leveraging social networks for brand engagement, but the results show what a tough slog this can be. The firm did garner fourteen thousand Facebook “fans” for its Crest Whitestrips product, but those fans were earned while giving away free movie tickets and other promos. The New York Times quipped that with those kinds of incentives, “a hemorrhoid cream” could have attracted a similar group of “fans.” When the giveaways stopped, thousands promptly “unfanned” Whitestrips. Results for Procter & Gamble’s “2X Ultra Tide” fan page were also pretty grim. P&G tried offbeat appeals for customer-brand bonding, including asking Facebookers to post “their favorite places to enjoy stain-making moments.” But a check eleven months after launch had garnered just eighteen submissions, two from P&G, two from staffers at spoof news site The Onion, and a bunch of short posts such as “Tidealicious!” (Stross, 2008) Efforts around engagement opportunities like events (Ben & Jerry’s) or products consumers are anxious to identify themselves with (a band or a movie) may have more success than trying to promote consumer goods that otherwise offer little allegiance, but efforts are so new that metrics are scarce, impact is tough to gauge, and best practices are still unclear. Facebook Engagement Ads http://www.facebook.com/video/video....v=629649849493 Source: Facebook. Key Takeaways • Content adjacency and user attention make social networking ads less attractive than search and professionally produced content sites. • Google enjoys significantly higher click-through rates than Facebook. • Display ads are often charged based on impression. Social networks also offer lower CPM rates than many other, more targeted Web sites. • Social networking has been difficult to monetize, as users are online to engage friends, not to hunt for products or be drawn away by clicks. • Many firms have begun to experiment with engagement ads. While there have been some successes, engagement campaigns often haven’t yielded significant results. Questions and Exercises 1. 
How are most display ads billed? What acronym is used to describe pricing of most display ads? 2. How are most text ads billed? 3. Contrast Facebook and Google click-through rates. Contrast Facebook CPMs with CPMs at professional content sites. Why the discrepancy? (A worked comparison follows these questions.) 4. What is the content adjacency problem? Search for examples of firms that have experienced embarrassment due to content adjacency—describe them, explain why the incidents occurred, and state whether site operators could have done something to reduce the likelihood of these issues occurring. 5. What kinds of Web sites are most susceptible to content adjacency? Are news sites? Why or why not? What sorts of technical features might act as breeding grounds for content adjacency problems? 6. If a firm removed user content because it was offensive to an advertiser, what kinds of problems might this create? When (if ever) should a firm remove or take down user content? 7. How are firms attempting to leverage social networks for brand and product engagement? 8. What are the challenges that social networking sites face when trying to woo advertisers? 9. Describe an innovative marketing campaign that has leveraged Facebook or other social networking sites. What factors made this campaign work? Are all firms likely to have this sort of success? Why or why not? 10. Have advertisers ever targeted you when displaying ads on Facebook? How were you targeted? What did you think of the effort?
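To make question 3 concrete, the back-of-envelope comparison below converts the click-through and CPM figures quoted in this section into revenue per thousand ad impressions. The cost-per-click value for search is an assumption chosen only for illustration; the click-through rates and CPM figures come from the section itself.

```python
# Search ads are priced per click, so revenue per 1,000 impressions depends on how
# often users click. Display/social ads are priced per 1,000 impressions (CPM) directly.

assumed_cost_per_click = 0.50        # assumption for illustration only (dollars per click)

google_ctr   = 0.02                  # roughly 2% of search ads get clicked (per the section)
facebook_ctr = 0.0004                # about 0.04% click-through on Facebook

google_effective_cpm   = assumed_cost_per_click * google_ctr * 1000
facebook_effective_cpm = assumed_cost_per_click * facebook_ctr * 1000

print(f"Search, effective CPM at $0.50 per click: ${google_effective_cpm:.2f}")    # $10.00
print(f"Facebook, same cost-per-click assumption: ${facebook_effective_cpm:.2f}")  # $0.20

# Compare with the impression-based rates quoted in the section:
quoted_cpms = {"Lookery resale of Facebook inventory": 0.075,
               "Facebook Social Ads floor": 0.15,
               "Mashable (low end)": 7.00,
               "TechTarget": 100.00}
for site, cpm in quoted_cpms.items():
    print(f"{site}: ${cpm:.3f} per 1,000 impressions")
```

Whatever cost per click one assumes, the two-orders-of-magnitude gap in click-through rates helps explain why hunt-oriented search inventory commands so much more per impression than hike-oriented social inventory.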
textbooks/biz/Management/Book%3A_Information_Systems__A_Managers_Guide_to_Harnessing_Technology/08%3A_Facebook-_Building_a_Business_from_the_Social_Graph/8.06%3A_Advertising_and_Social_Networks-_A_Work_in_Progress.txt
Learning Objectives After studying this section you should be able to do the following: 1. Understand the difference between opt-in and opt-out efforts. 2. Recognize how user issues and procedural implementation can derail even well-intentioned information systems efforts. 3. Recognize the risks associated with being a pioneer in new media efforts, and understand how missteps led to Facebook and its partners being embarrassed (and in some cases sued) by Beacon’s design and rollout issues. Conventional advertising may grow into a great business for Facebook, but the firm was clearly sitting on something that was unconventional compared to prior generations of Web services. Could the energy and viral nature of social networks be harnessed to offer truly useful consumer information to users? Word of mouth is considered the most persuasive (and valuable) form of marketing (Kumar et al., 2007), and Facebook was a giant word-of-mouth machine. What if the firm worked with vendors and grabbed consumer activity at the point of purchase to put into the News Feed and post to a user’s profile? If you rented a video, bought a cool product, or dropped something in your wish list, your buddies could get a heads-up and they might ask you about it. The person being asked feels like an expert, the person with the question gets a frank opinion, and the vendor providing the data just might get another sale. It looked like a home run. This effort, named Beacon, was announced in November 2007. Some forty e-commerce sites signed up, including Blockbuster, Fandango, eBay, Travelocity, Zappos, and the New York Times. Zuckerberg was so confident of the effort that he stood before a group of Madison Avenue ad executives and declared that Beacon would represent a “once-in-a-hundred-years” fundamental change in the way media works. As with News Feeds, user reaction was swift and brutal. The commercial activity of Facebook users began showing up without their consent. The biggest problem with Beacon was that it was “opt-out” instead of “opt-in.” Facebook (and its partners) assumed users would agree to sharing data in their feeds. A pop-up box did appear briefly on most sites supporting Beacon, but it disappeared after a few seconds (Nakashima, 2007). Many users, blind to these sorts of alerts, either clicked through or ignored the warnings. And well…there are some purchases you might not want to broadcast to the world. “Facebook Ruins Christmas for Everyone!” screamed one headline from MSNBC.com. Another from U.S. News and World Report read “How Facebook Stole Christmas.” The Washington Post ran the story of Sean Lane, a twenty-eight-year-old tech support worker from Waltham, Massachusetts, who got a message from his wife just two hours after he bought a ring on Overstock.com. “Who is this ring for?” she wanted to know. Facebook had not only posted a feed item noting that her husband had bought the ring, but also that he got it at a 51 percent discount! Overstock quickly announced that it was halting participation in Beacon until Facebook changed its practice to opt in (Nakashima, 2007). MoveOn.org started a Facebook group and online petition protesting Beacon. The Center for Digital Democracy and the U.S. Public Interest Research Group asked the Federal Trade Commission to investigate Facebook’s advertising programs. And a Dallas woman sued Blockbuster for violating the Video Privacy Protection Act (a 1988 U.S. law prohibiting unauthorized disclosure of video store rental records).
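In design terms, the difference between Beacon as launched and what users expected comes down to a default setting. The sketch below contrasts the two; the function and field names are invented for illustration and are not Facebook’s or its partners’ actual code.

```python
def publish_purchase(user, story, opt_in_required=True):
    """Decide whether a partner site's purchase story reaches the user's friends."""
    consented = user.get("sharing_consent")  # None means the user never answered

    if opt_in_required:
        # Opt-in: silence is treated as "no." Nothing is shared without an explicit yes.
        if consented is True:
            send_to_feed(user, story)
    else:
        # Opt-out (Beacon's launch design): silence is treated as "yes."
        # A user who misses or ignores the brief pop-up gets shared by default.
        if consented is not False:
            send_to_feed(user, story)

def send_to_feed(user, story):
    print(f"News Feed: {user['name']} {story}")

shopper = {"name": "Sean", "sharing_consent": None}  # never saw or clicked the pop-up
publish_purchase(shopper, "bought a diamond ring (51% off!)", opt_in_required=False)  # shared
publish_purchase(shopper, "bought a diamond ring (51% off!)", opt_in_required=True)   # kept private
```

The fix described next is essentially a flip of that default.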
To Facebook’s credit, the firm acted swiftly. Beacon was switched to an opt-in system, where user consent must be given before partner data is sent to the feed. Zuckerberg would later say regarding Beacon: “We’ve made a lot of mistakes building this feature, but we’ve made even more with how we’ve handled them. We simply did a bad job with this release, and I apologize for it” (McCarthy, 2007). Beacon was eventually shut down and \$9.5 million was donated to various privacy groups as part of its legal settlement (Brodkin, 2009). Despite the Beacon fiasco, new users continued to flock to the site, and loyal users stuck with Zuck. Perhaps a bigger problem was that many of those forty A-list e-commerce sites that took a gamble with Facebook now had their names associated with a privacy screw-up that made headlines worldwide. A manager so burned isn’t likely to sign up first for the next round of experimentation. From the Prada example in Chapter 3 “Zara: Fast Fashion from Savvy Systems” we learned that savvy managers look beyond technology and consider complete information systems—not just the hardware and software of technology but also the interactions among the data, people, and procedures that make up (and are impacted by) information systems. Beacon’s failure is a cautionary tale of what can go wrong if users fail to broadly consider the impact and implications of an information system on all those it can touch. Technology’s reach is often farther, wider, and more significantly impactful than we originally expect. Reputation Damage and Increased Scrutiny—The Facebook TOS Debacle Facebook also suffered damage to its reputation, brand, and credibility, further reinforcing perceptions that the company acts brazenly, without considering user needs, and is fast and loose on privacy and user notification. Facebook worked through the feeds outrage, eventually convincing users of the benefits of feeds. But Beacon was a fiasco. And now users, the media, and watchdogs were on the alert. When the firm modified its terms of service (TOS) policy in Spring 2009, the uproar was immediate. As a cover story in New York magazine summed it up, Facebook’s new TOS appeared to state, “We can do anything we want with your content, forever,” even if a user deletes their account and leaves the service (Grigoriadis, 2009). Yet another privacy backlash! Activists organized, the press crafted juicy, attention-grabbing headlines, and the firm was forced once again to backtrack. But here’s where others can learn from Facebook’s missteps and response. The firm was contrite and reached out to explain and engage users. The old TOS were reinstated, and the firm posted a proposed new version that gave the firm broad latitude in leveraging user content without claiming ownership. And the firm renounced the right to use this content if a user closed their Facebook account. This new TOS was offered in a way that solicited user comments, and it was submitted to a community vote, considered binding if 30 percent of Facebook users participated. Zuckerberg’s move appeared to have turned Facebook into a democracy and helped empower users to determine the firm’s next step. Despite the uproar, only about 1 percent of Facebook users eventually voted on the measure, but the 74 percent to 26 percent ruling in favor of the change gave Facebook some cover to move forward (Smith, 2009). This event also demonstrates that a tempest can be generated by a relatively small number of passionate users. 
Firms ignore the vocal and influential at their own peril! In Facebook’s defense, the broad TOS was probably more a form of legal protection than any nefarious attempt to exploit all user posts ad infinitum. The U.S. legal environment does require that explicit terms be defined and communicated to users, even if these are tough for laypeople to understand. But a “trust us” attitude toward user data doesn’t work, particularly for a firm considered to have committed ham-handed gaffes in the past. Managers must learn from the freewheeling Facebook community. In the era of social media, your actions are now subject to immediate and sustained review. Violate the public trust and expect the equivalent of a high-powered investigative microscope examining your every move, and a very public airing of the findings. Key Takeaways • Word of mouth is the most powerful method for promoting products and services, and Beacon was conceived as a giant word-of-mouth machine with win-win benefits for firms, recommenders, recommendation recipients, and Facebook. • Beacon failed because it was an opt-out system that was not thoroughly tested beforehand, and because user behavior, expectations, and system procedures were not completely taken into account. • Partners associated with the rapidly rolled out, poorly conceived, and untested effort were embarrassed. Several faced legal action. • Facebook also reinforced negative perceptions regarding the firm’s attitudes toward users, notifications, and their privacy. This attitude only served to focus a continued spotlight on the firm’s efforts, and users became even less forgiving. • Activists and the media were merciless in criticizing the firm’s Terms of Service changes. Facebook’s democratizing efforts demonstrate lessons other organizations can learn from, regarding user scrutiny, public reaction, and stakeholder engagement. Questions and Exercises 1. What is Beacon? Why was it initially thought to be a good idea? What were the benefits to firm partners, recommenders, recommendation recipients, and Facebook? Who were Beacon’s partners and what did they seek to gain through the effort? 2. Describe “the biggest problem with Beacon.” Would you use Beacon? Why or why not? 3. How might Facebook and its partners have avoided the problems with Beacon? Could the effort be restructured while still delivering on its initial promise? Why or why not? 4. Beacon shows the risk in being a pioneer—are there risks in being too cautious and not pioneering with innovative, ground-floor marketing efforts? What kinds of benefits might a firm miss out on? Is there a disadvantage in being late to the party with these efforts, as well? Why or why not? 5. Why do you think Facebook changed its Terms of Service? Did these changes concern you? Were users right to rebel? What could Facebook have done to avoid the problem? Did Facebook do a good job in follow-up? How would you advise Facebook to apply lessons learned from the TOS controversy?
textbooks/biz/Management/Book%3A_Information_Systems__A_Managers_Guide_to_Harnessing_Technology/08%3A_Facebook-_Building_a_Business_from_the_Social_Graph/8.07%3A_Privacy_Peril-_Beacon_and_the_TOS_Debacle.txt
Learning Objectives After studying this section you should be able to do the following: 1. Understand the extent and scope of the predator problem on online social networks. 2. Recognize the steps firms are taking to proactively engage and limit these problems. While spoiling Christmas is bad, sexual predators are far worse, and in October 2007, Facebook became an investigation target. Officials from the New York State Attorney General’s office had posed as teenagers on Facebook and received sexual advances. Complaints to the service from investigators posing as parents were also not immediately addressed. These were troubling developments for a firm that prided itself on trust and authenticity. In a 2008 agreement with forty-nine states, Facebook offered aggressive programs, many of which put it in line with MySpace. MySpace had become known as a lair for predators, and after months of highly publicized tragic incidents, the firm had become very aggressive about protecting minors. To get a sense of the scope of the problem, consider that MySpace claimed that it had found and deleted some twenty-nine thousand accounts from its site after comparing profiles against a database of convicted sex offenders1. Following MySpace’s lead, Facebook agreed to respond to complaints about inappropriate content within twenty-four hours and to allow an independent examiner to monitor how it handles complaints. The firm imposed age-locking restrictions on profiles, reviewing any attempt by someone under the age of eighteen to change their date of birth. Profiles of minors were no longer searchable. The site agreed to automatically send a warning message when a child is at risk of revealing personal information to an unknown adult. And links to explicit material, the most offensive Facebook groups, and any material related to cyberbullying were banned. Busted on Facebook Chapter 7 “Peer Production, Social Media, and Web 2.0” warned that your digital life will linger forever, and that employers are increasingly plumbing the depths of virtual communities in order to get a sense of job candidates. And it’s not just employers. Sleuths at universities and police departments have begun looking to Facebook for evidence of malfeasance. Oxford University fined graduating students more than £10,000 for their post-exam celebrations, evidence of which was picked up from Facebook. Police throughout the United States have made underage drinking busts and issued graffiti warnings based on Facebook photos, too. Beware—the Web knows! Key Takeaways • Thousands of sex offenders and other miscreants have been discovered on MySpace, Facebook, and other social networks. They are a legitimate risk to the community and they harm otherwise valuable services. • A combination of firm policies, computerized and human monitoring, aggressive reporting and follow-up, and engagement with authorities can reduce online predator risks. • Firms that fail to fully engage this threat put users and communities at risk and may experience irreparable damage to firms and reputations. Questions and Exercises 1. How big was the predator problem on MySpace? What efforts have social networks employed to cut down on the number of predators online? 2. Investigate the current policies regarding underage users on Facebook. Do you think the firm adequately protects its users? Why or why not? 3. What age is appropriate for users to begin using social networks? Which services are appropriate at which ages? 
Are there social networks targeted at very young children? Do you think that these are safe places? Why or why not? 1“Facebook Targets China, World’s Biggest Web Market,” Reuters, June 20, 2008.
textbooks/biz/Management/Book%3A_Information_Systems__A_Managers_Guide_to_Harnessing_Technology/08%3A_Facebook-_Building_a_Business_from_the_Social_Graph/8.08%3A_Predators_and_Privacy.txt
Learning Objectives After studying this section you should be able to do the following: 1. Describe Facebook’s efforts to integrate its service with other Web sites and the potential strategic benefit for Facebook and its partners. 2. List and discuss the potential benefits and risks of engaging in the kinds of intersite sharing and collaboration efforts described in this section. In spring 2010, the world got a sense of the breadth and depth of Mark Zuckerberg’s vision. During the firm’s annual f8 Developers Conference, Facebook launched a series of initiatives that placed the company directly at the center of identity, sharing, and personalization—not just on Facebook but also across the Web. With just a few lines of HTML code, any developer could add a Facebook “Like” button to their site and take advantage of the social network’s power of viral distribution. A user clicking a page’s “Like” button would then automatically send a link to that page to their news feed, where it could be seen by all of their friends. No additional sign-in is necessary as long as you’ve logged into Facebook first (reinforcing Facebook’s importance as the first stop in your Internet surfing itinerary). While some sites renamed “Like” to “Recommend” (after all, do you really want to “like” a story about a disaster or tragedy?), the effort was adopted with stunning speed. Facebook’s “Like” button was served up more than one billion times across the Web in the first twenty-four hours, and over fifty thousand Web sites signed up to add the “Like” button to their content within the first week (Oreskovic, 2010). Facebook also offered a system where Web site operators can choose to accept a user’s Facebook credentials for logging in. Users like this because they can access content without the hurdle of creating a new account. Web sites like it because with the burden of signing up out of the way, Facebook becomes an experimentation lubricant: “Oh, I can use my Facebook ID to sign in? Then let me try this out.” Facebook also lets Web sites embed some Facebook functionality right on their pages. A single line of code added to any page creates a “social toolbar” that shows which of your friends are logged into Facebook, and allows access to Facebook Chat without leaving that site. Site operators who are keen on making it easy for friends to summon friends to their pages can now sprinkle these little bits of Facebook across the Web. Other efforts allow firms to leverage Facebook data to make their sites more personalized. Firms around the Web can now show if a visitor’s friends have “Liked” items on the site, posted comments, or performed other actions. Using this feature, Facebook users logging into Yelp can see a list of restaurants recommended by trusted friends instead of just the reviews posted by a bunch of strangers. Users of the music-streaming site Pandora can have the service customized based on music tastes pulled from their Facebook profile page. They can share stations with friends and have data flow back to update the music preferences listed in their Facebook profile pages. Visit CNN and the site can pull together a list of stories recommended by friends (Valentino-DeVries, 2010). Think about how this strengthens the social graph. While items in the news feed might quickly scroll away and disappear, that data can now be pulled up within a Web site, providing insight from friends when and where you’re likely to want it most. 
Taken together, these features enlist Web sites to serve as vassal states in the Facebook empire. Each of these ties makes Facebook membership more valuable by enhancing network effects, strengthening switching costs, and creating larger sets of highly personalized data to leverage. Facebook: The Bank of the Web? Those with an eye for business disruption are watching the evolution of Facebook Credits. Credits can be used to pay for items, such as weapons in video games or virtual gifts. Facebook shares Credits revenue with application developers but takes 30 percent off the top for acting as banker and transaction clearing house. There are real bucks to be made from digital make-believe. Analysts estimate that in 2009, virtual goods racked up \$1 billion in U.S. transactions and \$5 billion worldwide (Womack & Valerio, 2010; Miller & Stone, 2009). Facebook currently isn’t much of a player in virtual goods, but that may change. Many expect Credits use to grow into a thriving standard. Users are far more likely to trust Facebook with their credit card than a little-known app developer. There are also an increasing number of ways to pay for Credits. Facebook’s App2Credits effort lets firms offer Credits in ways that don’t involve a credit card, including getting Credits as part of a card loyalty program, converting unwanted real-world gift cards into Facebook Credits, or earning Credits for shopping or performing other online tasks (Kincaid, 2010). Credits were rolled out supporting fifteen international currencies and multiple credit cards. Transaction support is provided through a partnership with PayPal, and a deal with mobile payments start-up Zong allows users to bill Credits to their phone (McCarthy, 2010). All this banking activity leaves some wondering if Facebook might not have grander ambitions. The Financial Times has referred to Facebook as being on the path to becoming “The Bank of the Web” (Nuttall, 2010). Could Facebook morph into an actual real-currency bank? A site that knows how to reach your friends might offer an easy way to, say, settle a dinner tab or hound buddies for their Final Four pool money. This might also be a solid base for even deeper banking links between users and all those firms Facebook has begun to leverage in deeper data-sharing partnerships. This may be something to think about, or perhaps, to bank on! More Privacy Controversy The decision to launch these new features as “opt-out” instead of “opt-in” immediately drew the concern of lawmakers. Given the Beacon debacle, the TOS controversy, and Google’s problems with Buzz (see Chapter 14 “Google: Search, Online Advertising, and Beyond”), you’d think Facebook would have known better. But within a week of the f8 launch, four U.S. senators contacted the firm, asking why it was so difficult to opt out of the information-sharing platform (Lardinois, 2010). Amid a crush of negative publicity, the firm was forced to quickly roll out simplified privacy management controls. Facebook’s struggles show the tension faced by any firm that wants to collect data to improve the user experience (and hopefully make money along the way). Opt-out guarantees the largest possible audience and that’s key to realizing the benefits of network effects, data, and scale. Making efforts opt-in creates the very real risk that not enough users will sign up and that the reach and impact of these kinds of initiatives will be limited (Lardinois, 2010). 
Fast Company calls this the paradox of privacy, saying, “We want some semblance of control over our personal data, even if we likely can’t be bothered to manage it” (Manjoo, 2010). Evidence suggests that most people are accepting some degree of data sharing as long as they know that they can easily turn it off if they want to. For example, when Google rolled out ads that tracked users across the network of Web sites running Google ads, the service also provided a link in each ad where users could visit an “ad preferences manager” to learn how they were being profiled, to change settings, and to opt out (see Chapter 14 “Google: Search, Online Advertising, and Beyond”). It turns out only one in fifteen visitors to the ad preferences manager ended up opting out completely (Manjoo, 2010). Managers seeking to leverage data should learn from the examples of Facebook and Google and be certain to offer clear controls that empower user choice. Free Riders and Security Concerns Facebook also allows third-party developers to create all sorts of apps to access Facebook data. Facebook feeds are now streaming through devices that include Samsung, Vizio, and Sony televisions; Xbox 360 and Wii game consoles; Verizon’s FiOS pay television service; and the Amazon Kindle. While Facebook might never have the time or resources to create apps that put its service on every gadget on the market, they don’t need to. Developers using Facebook’s access tools will gladly pick up the slack. But there are major challenges with a more open approach, most notably a weakening of strategic assets, revenue sharing, and security. First, let’s discuss weakened assets. Mark Zuckerberg’s geeks have worked hard to make their site the top choice for most of the world’s social networkers and social network application developers. Right now, everyone goes to Facebook because everyone else is on Facebook. But as Facebook opens up access to users and content, it risks supporting efforts that undermine the firm’s two most compelling sources of competitive advantage: network effects and switching costs. Any effort that makes it easier to pack up your “social self” and move it elsewhere risks undermining vital competitive resources advantages (it still remains more difficult to export contacts, e-mails, photos, and video from Facebook than it does from sites supporting OpenSocial, a rival platform backed by Google and supported by many of Facebook’s competitors) (Vogelstein, 2009). This situation also puts more pressure on Facebook to behave. Lower those switching costs at a time when users are disgusted with firm behavior, and it’s not inconceivable that a sizable chunk of the population could bolt for a new rival (to Facebook’s credit, the site also reached out to prior critics like MoveOn.org, showing Facebook’s data-sharing features and soliciting input months before their official release). Along with asset weakening comes the issue of revenue sharing. As mentioned earlier, hosting content (especially photos and rich media) is a very expensive proposition. What incentive does a site have to store data if it will just be sent to a third-party site that will run ads around this content and not share the take? Too much data portability presents a free rider problem where firms mooch off Facebook’s infrastructure without offering much in return. Consider services like TweetDeck. The free application allows users to access their Facebook feeds and post status updates—alongside Twitter updates and more—all from one interface. 
Cool for the user, but bad for Facebook, since each TweetDeck use means Facebook users are “off-site,” not looking at ads, and hence not helping Zuckerberg & Co. earn revenue. It’s as if the site has encouraged the equivalent of an ad blocker, yet Facebook’s openness lets this happen! Finally, consider security. Allowing data streams that contain potentially private posts and photographs to squirt across the Internet and land where you want them raises all sorts of concerns. What’s to say an errant line of code doesn’t provide a back door to your address book or friends list? To your messaging account? To let others see photos you’d hoped to only share with family? Security breaches can occur on any site, but once the data is allowed to flow freely, every site with access is, for hackers, the equivalent of a potential door to open or a window to crawl through. Social Networking Goes Global Facebook will eventually see stellar growth start to slow as the law of large numbers sets in. The shift from a growth business to a mature one can be painful, and for online firms it can occur relatively quickly. That doesn’t mean these firms will become unprofitable, but to sustain growth (particularly important for keeping up the stock price of a publicly traded company), firms often look to expand abroad. Facebook’s crowdsourcing localization effort, where users were asked to look at Facebook phrases and offer translation suggestions for their local language (see Chapter 7 “Peer Production, Social Media, and Web 2.0”), helped the firm rapidly deploy versions in dozens of markets, blasting the firm past MySpace in global reach. But network effects are both quick and powerful, and late market entry can doom a business reliant on the positive feedback loop of a growing user base. And global competition is out there. Worldwide, Facebook wannabes include “Studiverzeichnis” (German for “student index”); Vkontakte (“in contact”), Russia’s most popular social networking site; and Renren (formerly Xiaonei), which is said to have registered 90 percent of China’s college students. China is proving a particularly difficult market for foreign Internet firms. Google, eBay, Yahoo! and MySpace have all struggled there (at one point, Rupert Murdoch even sent his wife, Wendi Deng Murdoch, to head up the MySpace China effort). And don’t be surprised to see some of these well-capitalized overseas innovators making a move on U.S. markets too. While global growth can seem like a good thing, acquiring global users isn’t the same as making money from them. Free sites with large numbers of users from developing nations face real cost/revenue challenges. As the New York Times points out, there are 1.6 billion Internet users worldwide, but fewer than half of them have disposable incomes high enough to interest major advertisers (Stone & Helft, 2009). Worse still, telecommunications costs in these markets are often higher, too. Bandwidth costs and dim revenue options caused video site Veoh to block access coming from Africa, Eastern Europe, Latin America, and some parts of Asia. MySpace already offers a stripped-down Lite option as its default in India. And execs at YouTube and Facebook haven’t ruled out lowering the quality of streaming media, file size, or other options, discriminating by region or even by user. 
Making money in the face of this so-called “International Paradox” requires an awareness of “fast and cheap” tech trends highlighted in Chapter 5 “Moore’s Law: Fast, Cheap Computing and What It Means for the Manager”, as well as an ability to make accurate predictions regarding regional macroeconomic trends. Ignore a market that’s unprofitable today and a rival could swoop in and establish network effects and other assets that are unbeatable tomorrow. But move too early and losses could drag you down. Key Takeaways • Facebook has extended its reach by allowing other Web sites to leverage the site. Facebook partners can add the “Like” button to encourage viral sharing of content, leverage Facebook user IDs for log-in, and tap a user’s friend and feed data to personalize and customize a user’s experience. • These efforts come with risks, including enabling free riders that might exploit the firm’s content without compensation, and the potential for privacy and security risks. • Facebook Credits are a currency for use for virtual gifts and games. The service accepts multiple currencies and payment methods; and while virtual goods have the potential to be a big business, some speculate that Facebook may one day be able to develop a payments and banking businesses from this base. • Global growth is highly appealing to firms, but expensive bandwidth costs and low prospects for ad revenue create challenges akin to the free rider problem. Questions and Exercises 1. Cite effective examples you’ve seen of Facebook features on other Web sites (or if you haven’t seen any, do some background research to uncover such efforts). Why do the efforts you’ve highlighted “work”? How do they benefit various parties? Does everyone benefit? Is anyone at risk? If so, explain the risks. 2. Should Facebook be as open as it is? In what ways might this benefit the firm? In what ways is it a risk? 3. How can Facebook limit criticism of its data-sharing features? Do you think it made mistakes during rollout? 4. What is TweetDeck? Why is a product like this a potential threat to Facebook? 5. Research OpenSocial online. What is this effort? What challenges does it face in attempting to become a dominant standard? 6. Facebook has global competitors. What determines the success of a social network within a given country? Why do network effects for social networks often fail to translate across national borders? 7. How did Facebook localize its site so quickly for various different regions of the world? 8. What factors encourage firms to grow an international user base as quickly as possible? Why is this a risk? What sorts of firms are at more risk than others?
textbooks/biz/Management/Book%3A_Information_Systems__A_Managers_Guide_to_Harnessing_Technology/08%3A_Facebook-_Building_a_Business_from_the_Social_Graph/8.09%3A_One_Graph_to_Rule_Them_All-_Facebook_Takes_Over_the_Web.txt
Learning Objectives After studying this section you should be able to do the following: 1. Question the \$15 billion valuation so often cited by the media. 2. Understand why Microsoft might be willing to offer to invest in Facebook at a higher valuation rate. It has often been said that the first phase of the Internet was about putting information online and giving people a way to find it. The second phase of the Web is about connecting people with one another. The Web 2.0 movement is big and impactful, but is there much money in it? While the valuations of private firms are notoriously difficult to pin down due to a lack of financial disclosure, the often-cited \$15 billion valuation from the fall of 2007 Microsoft investment was rich, even when made by such a deep-pocketed firm. Using estimates at the time of the deal, if Facebook were a publicly traded company, it would have a price-to-earnings ratio of five hundred; Google’s at the time was fifty-three, and the average for the S&P 500 is historically around fifteen. But the math behind the deal is a bit more complex than was portrayed in most press reports. The deal was also done in conjunction with an agreement that for a time let Microsoft manage the sale of Facebook’s banner ads worldwide. And Microsoft’s investment was done on the basis of preferred stock, granting the firm benefits beyond common stock, such as preference in terms of asset liquidation (Stone, 2008). Both of these are reasons a firm would be willing to “pay more” to get in on a deal. Another argument can be made for Microsoft purposely inflating the value of Facebook in order to discourage rival bidders. A fat valuation by Microsoft and a deal locking up ad rights makes the firm seem more expensive, less attractive, and out of reach for all but the richest and most committed suitors. Google may be the only firm that could possibly launch a credible bid, and Zuckerberg is reported to be genuinely uninterested in being absorbed by the search sovereign (Vogelstein, 2009). Since the fall of 2007, several others have invested private money into Facebook as well, including the Founders Fund and Li Ka-shing, the Hong Kong billionaire behind Hutchison Whampoa. Press reports and court documents suggest that these deals were done at valuations that were lower than what Microsoft accepted. In May 2009 Russian firm Digital Sky paid \$200 million for 1.96 percent of the firm, a ten-billion-dollar valuation (also in preferred stock). That’s a one-third haircut off the Microsoft price, albeit without the Redmond-specific strategic benefits of the investment (Kirkpatrick, 2008; Ante, 2008). And as the chart in Figure 8.2 “Revenue per User (2009)” shows, Facebook still lags well behind many of its rivals in terms of revenue per user. Figure 8.2 Revenue per User (2009) While Facebook’s reach has grown to over half a billion visitors a month, its user base generates far less cash on a per-person basis than many rivals do (Blodget, 2010). So despite the headlines, even at the time of the Microsoft investment, Facebook was almost certainly not valued at a pure \$15 billion. This isn’t to say definitively that Facebook won’t be worth \$15 billion (or more) someday, but even a valuation at “just” \$10 billion is a lot to pay for a then-profitless firm with estimated 2009 revenues of \$500 million. Of course, raising more capital enables Zuckerberg to go on the hunt as well. 
Facebook investor Peter Thiel confirmed the firm had already made an offer to buy Twitter (a firm which at the time had zero dollars in revenues and no discernible business model) for a cool half billion dollars (Ante, 2009). Much remains to be demonstrated for any valuation to hold. Facebook is new. Its models are evolving, and it has quite a bit to prove. Consider efforts to try to leverage friend networks. According to Facebook’s own research, “an average Facebook user with 500 friends actively follows the news on only forty of them, communicates with twenty, and keeps in close touch with about ten. Those with smaller networks follow even fewer” (Baker, 2009). That might not be enough critical mass to offer real, differentiable impact, and that may have been part of the motivation behind Facebook’s mishandled attempts to encourage more public data sharing. The advantages of leveraging the friend network hinge on increased sharing and trust, a challenge for a firm that has had so many high-profile privacy stumbles. There is promise. Profiling firm Rapleaf found that targeting based on actions within a friend network can increase click-through rates threefold—that’s an advantage advertisers are willing to pay for. But Facebook is still far from proving it can consistently achieve the promise of delivering valuable ad targeting. Steve Rubel wrote the following on his Micro Persuasion blog: “The Internet amber is littered with fossilized communities that once dominated. These former stalwarts include AOL, Angelfire, theGlobe.com, GeoCities, and Tripod.” Network effects and switching cost advantages can be strong, but not necessarily insurmountable if value is seen elsewhere and if an effort becomes more fad than “must have.” Time will tell if Facebook’s competitive assets and constant innovation are enough to help it avoid the fate of those that have gone before it. Key Takeaways • Not all investments are created equal, and a simple calculation of investment dollars multiplied by the percentage of firm owned does not tell the whole story. • Microsoft’s investment entitled the firm to preferred shares; it also came with advertising deal exclusivity. • Microsoft may also benefit from offering higher valuations that discourage rivals from making acquisition bids for Facebook. • Facebook has continued to invest capital raised in expansion, particularly in hardware and infrastructure. It has also pursued its own acquisitions, including a failed bid to acquire Twitter. • The firm’s success will hinge on its ability to create sustainably profitable revenue opportunities. It has yet to prove that data from the friend network will be large enough and can be used in a way that is differentiably attractive to advertisers. However, some experiments in profiling and ad targeting across a friend network have shown very promising results. Firms exploiting these opportunities will need to have a deft hand in offering consumer and firm value while quelling privacy concerns. Questions and Exercises 1. Circumstances change over time. Research the current state of Facebook’s financials—how much is the firm “valued at”? How much revenue does it bring in? How profitable is it? Are these figures easy or difficult to find? Why or why not? 2. Who else might want to acquire Facebook? Is it worth it at current valuation rates? 3. What motivation does Microsoft have in bidding so much for Facebook? 4. Do you think Facebook was wise to take funds from Digital Sky? Why or why not? 5. 
Do you think Facebook’s friend network is large enough to be leveraged as a source of revenue in ways that are notably different than conventional pay-per-click or CPM-based advertising? Would you be excited about certain possibilities? Creeped out by some? Explain possible scenarios that might work or might fail. Justify your interpretation of these scenarios. 6. So you’ve had a chance to learn about Facebook, its model, growth, outlook, strategic assets, and competitive environment. How much do you think the firm is worth? Which firms do you think it should compare with in terms of value, influence, and impact? Would you invest in Facebook? 7. Which firms might make good merger partners with Facebook? Would these deals ever go through? Why or why not?
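As a back-of-the-envelope illustration of the valuation arithmetic discussed in this section, the short Python sketch below shows how an implied valuation is derived from an investment’s size and stake, and how a price-to-earnings (P/E) ratio is computed. The Digital Sky figures come from the section above; the earnings number used to back out a P/E of five hundred is a hypothetical placeholder, since Facebook’s actual earnings at the time were not publicly disclosed.

```python
# Back-of-the-envelope valuation math, illustrating the figures cited in this section.
# The Digital Sky numbers come from the text; the earnings figure below is a
# hypothetical placeholder used only to show how a P/E ratio is derived.

def implied_valuation(investment: float, stake: float) -> float:
    """Valuation implied by paying `investment` dollars for `stake` (a fraction) of a firm."""
    return investment / stake

def price_to_earnings(valuation: float, annual_earnings: float) -> float:
    """P/E ratio: what an investor pays per dollar of annual earnings."""
    return valuation / annual_earnings

# May 2009: Digital Sky paid $200 million for 1.96 percent of Facebook.
dst_valuation = implied_valuation(200_000_000, 0.0196)
print(f"Implied valuation: ${dst_valuation / 1e9:.1f} billion")   # roughly $10.2 billion

# Hypothetical: at a $15 billion valuation, earnings of about $30 million imply a P/E of 500.
print(price_to_earnings(15_000_000_000, 30_000_000))               # 500.0
```

The same two functions also explain why the Microsoft deal is hard to read at face value: preferred shares and ad-deal exclusivity mean the dollars invested bought more than a simple percentage of the firm.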
textbooks/biz/Management/Book%3A_Information_Systems__A_Managers_Guide_to_Harnessing_Technology/08%3A_Facebook-_Building_a_Business_from_the_Social_Graph/8.10%3A_Is_Facebook_Worth_It.txt
Learning Objectives After studying this section you should be able to do the following: 1. Recognize the importance of software and its implications for the firm and strategic decision making. 2. Understand that software is everywhere; not just in computers, but also cell phones, cars, cameras, and many other technologies. 3. Know what software is and be able to differentiate it from hardware. 4. List the major classifications of software and give examples of each. We know computing hardware is getting faster and cheaper, creating all sorts of exciting and disruptive opportunities for the savvy manager. But what’s really going on inside the box? It’s software that makes the magic of computing happen. Without software, your PC would be a heap of silicon wrapped in wires encased in plastic and metal. But it’s the instructions—the software code—that enable a computer to do something wonderful, driving the limitless possibilities of information technology. Software is everywhere. An inexpensive cell phone has about one million lines of code, while the average car contains nearly one hundred million (Charette, 2005). In this chapter we’ll take a peek inside the chips to understand what software is. A lot of terms are associated with software: operating systems, applications, enterprise software, distributed systems, and more. We’ll define these terms up front, and put them in a managerial context. A follow-up chapter, Chapter 10 “Software in Flux: Partly Cloudy and Sometimes Free”, will focus on changes impacting the software business, including open source software, software as a service (SaaS), and cloud computing. These changes are creating an environment radically different from the software industry that existed in prior decades—confronting managers with a whole new set of opportunities and challenges. Managers who understand software can better understand the possibilities and impact of technology. They can make better decisions regarding the strategic value of IT and the potential for technology-driven savings. They can appreciate the challenges, costs, security vulnerabilities, legal and compliance issues, and limitations involved in developing and deploying technology solutions. In the next two chapters we will closely examine the software industry and discuss trends, developments and economics—all of which influence decisions managers make about products to select, firms to partner with, and firms to invest in. • What Is Software? When we refer to computer hardware (sometimes just hardware), we’re talking about the physical components of information technology—the equipment that you can physically touch, including computers, storage devices, networking equipment, and other peripherals. Software refers to a computer program or collection of programs—sets of instructions that tell the hardware what to do. Software gets your computer to behave like a Web browser or word processor, makes your iPod play music and video, and enables your bank’s ATM to spit out cash. It’s when we start to talk about the categories of software that most people’s eyes glaze over. To most folks, software is a big, incomprehensible alphabet soup of acronyms and geeky phrases: OS, VB, SAP, SQL, to name just a few. Don’t be intimidated. The basics are actually pretty easy to understand. But it’s not soup; it’s more of a layer cake. Think about computer hardware as being at the bottom of the layer cake. The next layer is the operating system, the collection of programs that control the hardware. 
Windows, Mac OS X, and Linux are operating systems. On top of that layer are applications—these can range from end-user programs like those in Office, to the complex set of programs that manage a business’s inventory, payroll, and accounting. At the top of the cake are users. Figure 9.1 The Hardware/Software Layer Cake The flexibility of these layers gives computers the customization options that managers and businesses demand. Understanding how the layers relate to each other helps you make better decisions on what options are important to your unique business needs, can influence what you buy, and may have implications for everything from competitiveness to cost overruns to security breaches. What follows is a manager’s guide to the main software categories with an emphasis on why each is important. Key Takeaways • Software refers to a computer program or collection of programs. It enables computing devices to perform tasks. • You can think of software as being part of a layer cake, with hardware at the bottom; the operating system controlling the hardware and establishing standards, the applications executing one layer up, and the users at the top. • How these layers relate to one another has managerial implications in many areas, including the flexibility in meeting business demand, costs, legal issues and security. • Software is everywhere—not just in computers, but also in cell phones, cars, cameras, and many other technologies. Questions and Exercises 1. Explain the difference between hardware and software. 2. Why should a manager care about software and how software works? What critical organizational and competitive factors can software influence? 3. What role has software played in your decision to select certain products? Has this influenced why you favored one product or service over another? 4. Find the Fortune 500 list online. Which firm is the highest ranked software firm? While the Fortune 500 ranks firms according to revenue, what’s this firm’s profitability rank? What does this discrepancy tell you about the economics of software development? Why is the software business so attractive to entrepreneurs? 5. Refer to earlier chapters (and particularly to Chapter 2 “Strategy and Technology: Concepts and Frameworks for Understanding What Separates Winners from Losers”): Which resources for competitive advantage might top software firms be able to leverage to ensure their continued dominance? Give examples of firms that have leveraged these assets, and why they are so strong.
textbooks/biz/Management/Book%3A_Information_Systems__A_Managers_Guide_to_Harnessing_Technology/09%3A_Understanding_Software-_A_Primer_for_Managers/9.01%3A_Introduction.txt
Learning Objectives After studying this section you should be able to do the following: 1. Understand what an operating system is and why computing devices require operating systems. 2. Appreciate how embedded systems extend Moore’s Law, allowing firms to create “smarter” products and services. Computing hardware needs to be controlled, and that’s the role of the operating system. The operating system (sometimes called the “OS”) provides a common set of controls for managing computer hardware, making it easier for users to interact with computers and for programmers to write application software. Just about every computing device has an operating system—desktops and laptops, enterprise-class server computers, your mobile phone. Even specialty devices like iPods, video game consoles, and television set top boxes run some form of OS. Some firms, like Apple and Nintendo, develop their own proprietary OS for their own hardware. Microsoft sells operating systems to everyone from Dell to the ATM manufacturer Diebold (listen for the familiar Windows error beep on some cash machines). And there are a host of specialty firms, such as Wind River (purchased by Intel), that help firms develop operating systems for all sorts of devices that don’t necessarily look like a PC, including cars, video editing systems, and fighter jet control panels. Anyone who has used both a PC and a Mac and has noticed differences across these platforms can get a sense of the breadth of what an operating system does. Even for programs that are otherwise identical for these two systems (like the Firefox browser), subtle differences are visible. Screen elements like menus, scroll bars, and window borders look different on the Mac than they do in Windows. So do the dialogue boxes that show up when you print or save. These items look and behave differently because each of these functions touches the hardware, and the team that developed Microsoft Windows created a system distinctly different from that of their Macintosh counterparts at Apple. Graphical user interface (UI) items like scroll bars and menus are displayed on the hardware of the computer display. Files are saved to the hardware of a hard drive or other storage device. Most operating systems also include control panels, desktop file management, and other support programs to work directly with hardware elements like storage devices, displays, printers, and networking equipment. The Macintosh Finder and the Windows Explorer are examples of components of these operating systems. The consistent look, feel, and functionality that operating systems enforce across various programs help make it easier for users to learn new software, which reduces training costs and operator error. See Figure 9.2 for similarities and differences. Figure 9.2 Differences between the Windows and Mac operating systems are evident throughout the user interface, particularly when a program interacts with hardware. Operating systems are also designed to give programmers a common set of commands to consistently interact with the hardware. These commands make a programmer’s job easier by reducing program complexity and making it faster to write software while minimizing the possibility of errors in code. Consider what an OS does for the Wii game developer. Nintendo’s Wii OS provides Wii programmers with a set of common standards to use to access the Wiimote, play sounds, draw graphics, save files, and more. 
Without this, games would be a lot more difficult to write, they’d likely look different, be less reliable, cost more, and there would be fewer titles available. Similarly, when Apple provided developers with a common set of robust, easy-to-use standards for the iPhone and (via the App Store) an easy way for users to install these applications on top of the iPhone/iPod touch OS, software development boomed, and the iPhone became hands-down the most versatile mobile computing device available1. In Apple’s case, some fifty thousand apps became available through the App Store in less than a year. A good OS and software development platform can catalyze network effects (see Chapter 6 “Understanding Network Effects”). While the OS seems geeky, its effective design has very strategic business implications! Figure 9.3 Operating System Market Share for Desktop, Server, and Mobile Phones Data provided by HitsLink Market Share, Forrester Research, IDC, and AdMob2.
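To make the idea of a “common set of commands” concrete, here is a minimal Python sketch of an application that saves a file and reports which platform it is running on. The program itself is indifferent to whether it runs on Windows, Mac OS X, or Linux; the operating system translates the same high-level calls into the hardware-specific work of writing to storage. This is an illustrative sketch, not code drawn from any particular OS vendor’s toolkit.

```python
import os
import platform

# The same application code runs unchanged on Windows, Mac OS X, or Linux.
# The operating system handles the hardware-specific details of storage and
# display behind these common, high-level commands.

def save_greeting(filename: str) -> None:
    # The OS decides how and where the bytes actually land on the storage hardware.
    with open(filename, "w") as f:
        f.write("Hello from an application sitting on top of the OS layer.\n")

if __name__ == "__main__":
    print("This program is running on:", platform.system())  # e.g., Windows, Darwin, Linux
    save_greeting("greeting.txt")
    # The standard library's os.path hides platform differences (path separators, etc.).
    print("Saved to:", os.path.abspath("greeting.txt"))
```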
textbooks/biz/Management/Book%3A_Information_Systems__A_Managers_Guide_to_Harnessing_Technology/09%3A_Understanding_Software-_A_Primer_for_Managers/9.02%3A_Operating_Systems.txt
Learning Objectives After studying this section you should be able to do the following: 1. Appreciate the difference between desktop and enterprise software. 2. List the categories of enterprise software. 3. Understand what an ERP (enterprise resource planning) software package is. 4. Recognize the relationship of the DBMS (database system) to the other enterprise software systems. 5. Recognize both the risks and rewards of installing packaged enterprise systems. Operating systems are designed to create a platform so that programmers can write additional applications, allowing the computer to do even more useful things. While operating systems control the hardware, application software (sometimes referred to as software applications, applications, or even just apps) performs the work that users and firms are directly interested in accomplishing. Think of applications as the place where the user’s or organization’s real work gets done. As we learned in Chapter 6 “Understanding Network Effects”, the more application software that is available for a platform (the more games for a video game console, the more apps for your phone), the more valuable it potentially becomes. Desktop software refers to applications installed on a personal computer—your browser, your Office suite (e.g., word processor, spreadsheet, presentation software), photo editors, and computer games are all desktop software. Enterprise software refers to applications that address the needs of multiple, simultaneous users in an organization or work group. Most companies run various forms of enterprise software programs to keep track of their inventory, record sales, manage payments to suppliers, cut employee paychecks, and handle other functions. Some firms write their own enterprise software from scratch, but this can be time consuming and costly. Since many firms have similar procedures for accounting, finance, inventory management, and human resource functions, it often makes sense to buy a software package (a software product offered commercially by a third party) to support some of these functions. So-called enterprise resource planning (ERP) software packages serve precisely this purpose. In the same way that Microsoft can sell you a suite of desktop software programs that work together, many companies sell ERP software that coordinates and integrates many of the functions of a business. The leading ERP vendors include the firms SAP and Oracle, although there are many firms that sell ERP software. A company doesn’t have to install all of the modules of an ERP suite, but it might add functions over time—for example, to plug in an accounting program that is able to read data from the firm’s previously installed inventory management system. And although a bit more of a challenge to integrate, a firm can also mix and match components, linking software the firm has written with modules purchased from different enterprise software vendors. Figure 9.4 ERP in Action1 An ERP system with multiple modules installed can touch many functions of the business: • Sales—A sales rep from Vermont-based SnowboardCo. takes an order for five thousand boards from a French sporting goods chain. The system can verify credit history, apply discounts, calculate price (in euros), and print the order in French. 
• Inventory—While the sales rep is on the phone with his French customer, the system immediately checks product availability, signaling that one thousand boards are ready to be shipped from the firm’s Burlington warehouse, the other four thousand need to be manufactured and can be delivered in two weeks from the firm’s manufacturing facility in Guangzhou. • Manufacturing—When the customer confirms the order, the system notifies the Guangzhou factory to ramp up production for the model ordered. • Human Resources—High demand across this week’s orders triggers a notice to the Guangzhou hiring manager, notifying her that the firm’s products are a hit and that the flood of orders coming in globally mean her factory will have to hire five more workers to keep up. • Purchasing—The system keeps track of raw material inventories, too. New orders trigger an automatic order with SnowboardCo’s suppliers, so that raw materials are on hand to meet demand. • Order Tracking—The French customer can log in to track her SnowboardCo order. The system shows her other products that are available, using this as an opportunity to cross-sell additional products. • Decision Support—Management sees the firm’s European business is booming and plans a marketing blitz for the continent, targeting board models and styles that seem to sell better for the Alps crowd than in the U.S. market. Other categories of enterprise software that managers are likely to encounter include the following: • customer relationship management (CRM) systems used to support customer-related sales and marketing activities • supply chain management (SCM) systems that can help a firm manage aspects of its value chain, from the flow of raw materials into the firm through delivery of finished products and services at the point-of-consumption • business intelligence (BI) systems, which use data created by other systems to provide reporting and analysis for organizational decision making Major ERP vendors are now providing products that extend into these and other categories of enterprise application software, as well. Most enterprise software works in conjunction with a database management system (DBMS), sometimes referred to as a “database system.” The database system stores and retrieves the data that an application creates and uses. Think of this as another additional layer in our cake analogy. Although the DBMS is itself considered an application, it’s often useful to think of a firm’s database systems as sitting above the operating system, but under the enterprise applications. Many ERP systems and enterprise software programs are configured to share the same database system so that an organization’s different programs can use a common, shared set of data. This system can be hugely valuable for a company’s efficiency. For example, this could allow a separate set of programs that manage an inventory and point-of-sale system to update a single set of data that tells how many products a firm has to sell and how many it has already sold—information that would also be used by the firm’s accounting and finance systems to create reports showing the firm’s sales and profits. Firms that don’t have common database systems with consistent formats across their enterprise often struggle to efficiently manage their value chain. Common procedures and data formats created by packaged ERP systems and other categories of enterprise software also make it easier for firms to use software to coordinate programs between organizations. 
This coordination can lead to even more value chain efficiencies. Sell a product? Deduct it from your inventory. When inventory levels get too low, have your computer systems send a message to your supplier’s systems so that they can automatically build and ship replacement product to your firm. In many cases these messages are sent without any human interaction, reducing time and errors. And common database systems also facilitate the use of BI systems that provide critical operational and competitive knowledge and empower decision making. For more on CRM and BI systems, and the empowering role of data, see Chapter 11 “The Data Asset: Databases, Business Intelligence, and Competitive Advantage”. Figure 9.5 An organization’s database management system can be set up to work with several applications both within and outside the firm. The Rewards and Risks of Packaged Enterprise Systems When set up properly, enterprise systems can save millions of dollars and turbocharge organizations. For example, the CIO of office equipment maker Steelcase credited the firm’s ERP with an eighty-million-dollar reduction in operating expenses saved from eliminating redundant processes and making data more usable. The CIO of Colgate Palmolive also praised their ERP, saying, “The day we turned the switch on, we dropped two days out of our order-to-delivery cycle” (Robinson & Dilts, 1999). Packaged enterprise systems can streamline processes, make data more usable, and ease the linking of systems with software across the firm and with key business partners. Plus, the software that makes up these systems is often debugged, tested, and documented with an industrial rigor that may be difficult to match with proprietary software developed in-house. But for all the promise of packaged solutions for standard business functions, enterprise software installations have proven difficult. Standardizing business processes in software that others can buy means that those functions are easy for competitors to match, and the vision of a single monolithic system that delivers up wondrous efficiencies has been difficult for many to achieve. The average large company spends roughly \$15 million on ERP software, with some installations running into the hundreds of millions of dollars (Rettig, 2007). And many of these efforts have failed disastrously. FoxMeyer was once a six-billion-dollar drug distributor, but a failed ERP installation led to a series of losses that bankrupted the firm. The collapse was so rapid and so complete that just a year after launching the system, the carcass of what remained of the firm was sold to a rival for less than \$80 million. Hershey Foods blamed a \$466 million revenue shortfall on glitches in the firm’s ERP rollout. Among the problems, the botched implementation prevented the candy maker from getting product to stores during the critical period before Halloween. Nike’s first SCM and ERP implementation was labeled a “disaster”; their systems were blamed for over \$100 million in lost sales (Koch, 2004). Even tech firms aren’t immune to software implementation blunders. HP once blamed a \$160 million loss on problems with its ERP systems (Charette, 2005). Manager beware—there are no silver bullets. For insight on the causes of massive software failures, and methods to improve the likelihood of success, see Section 9.6 “Total Cost of Ownership (TCO): Tech Costs Go Way beyond the Price Tag”. Key Takeaways • Application software focuses on the work of a user or an organization. 
• Desktop applications are typically designed for a single user. Enterprise software supports multiple users in an organization or work group. • Popular categories of enterprise software include ERP (enterprise resource planning), SCM (supply chain management), CRM (customer relationship management), and BI (business intelligence) software, among many others. • These systems are used in conjunction with database management systems, programs that help firms organize, store, retrieve, and maintain data. • ERP and other packaged enterprise systems can be challenging and costly to implement, but can help firms create a standard set of procedures and data that can ultimately lower costs and streamline operations. • The more application software that is available for a platform, the more valuable that platform becomes. • The DBMS stores and retrieves the data used by the other enterprise applications. Different enterprise systems can be configured to share the same database system in order to share common data. • Firms that don’t have common database systems with consistent formats across their enterprise often struggle to efficiently manage their value chain, and often lack the flexibility to introduce new ways of doing business. Firms with common database systems and standards often benefit from increased organizational insight and decision-making capabilities. • Enterprise systems can cost millions of dollars in software, hardware, development, and consulting fees, and many firms have failed when attempting large-scale enterprise system integration. Simply buying a system does not guarantee its effective deployment and use. • When set up properly, enterprise systems can save millions of dollars and turbocharge organizations by streamlining processes, making data more usable, and easing the linking of systems with software across the firm and with key business partners. Questions and Exercises 1. What is the difference between desktop and enterprise software? 2. Who are the two leading ERP vendors? 3. List the functions of a business that might be impacted by an ERP. 4. What do the acronyms ERP, CRM, SCM, and BI stand for? Briefly describe what each of these enterprise systems does. 5. Where in the “layer cake” analogy does the DBMS lie? 6. Name two companies that have realized multimillion-dollar benefits as a result of installing enterprise systems. 7. Name two companies that have suffered multimillion-dollar disasters as a result of failed enterprise system installations. 8. How much does the average large company spend on ERP software? 1Adapted from G. Edmondson, “Silicon Valley on the Rhine,” BusinessWeek International, November 3, 1997.
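To make the shared-database idea described in this section concrete, the sketch below shows two hypothetical “modules”—a point-of-sale program and an inventory report—reading and writing the same database, so a sale recorded by one program is immediately visible to the other. The table, product, and function names are invented for illustration; real ERP suites and commercial DBMS products are far more elaborate, but the underlying principle is the same.

```python
import sqlite3

# A single shared database (here, an in-memory SQLite table) stands in for the
# common DBMS that different enterprise modules are configured to use.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE inventory (product TEXT PRIMARY KEY, on_hand INTEGER)")
db.execute("INSERT INTO inventory VALUES ('snowboard', 5000)")

def point_of_sale_module(product: str, quantity: int) -> None:
    """Records a sale by deducting stock from the shared inventory table."""
    db.execute("UPDATE inventory SET on_hand = on_hand - ? WHERE product = ?",
               (quantity, product))
    db.commit()

def inventory_report_module() -> list:
    """A separate program reads the very same data the sales module just updated."""
    return db.execute("SELECT product, on_hand FROM inventory").fetchall()

point_of_sale_module("snowboard", 1000)   # the warehouse ships 1,000 boards
print(inventory_report_module())          # [('snowboard', 4000)]
```

Because both functions touch one shared set of data, there is no re-keying and no chance for the sales and inventory figures to drift apart—the efficiency argument the section makes for common database systems.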
textbooks/biz/Management/Book%3A_Information_Systems__A_Managers_Guide_to_Harnessing_Technology/09%3A_Understanding_Software-_A_Primer_for_Managers/9.03%3A_Application_Software.txt
Learning Objectives After studying this section you should be able to do the following: 1. Understand the concept of distributed computing and its benefits. 2. Understand the client-server model of distributed computing. 3. Know what Web services are and the benefits that Web services bring to firms. 4. Appreciate the importance of messaging standards and understand how sending messages between machines can speed processes, cut costs, reduce errors, and enable new ways of doing business. When computers in different locations can communicate with one another, this is often referred to as distributed computing. Distributed computing can yield enormous efficiencies in speed, error reduction, and cost savings and can create entirely new ways of doing business. Designing systems architecture for distributed systems involves many advanced technical topics. Rather than provide an exhaustive decomposition of distributed computing, the examples that follow are meant to help managers understand the bigger ideas behind some of the terms that they are likely to encounter. Let’s start with the term server. This is a tricky one because it’s frequently used in two ways: (1) in a hardware context a server is a computer that has been configured to support requests from other computers (e.g., Dell sells servers) and (2) in a software context a server is a program that fulfills requests (e.g., the Apache open source Web server). Most of the time, server software resides on server-class hardware, but you can also set up a PC, laptop, or other small computer to run server software, albeit less powerfully. And you can use mainframe or super-computer-class machines as servers, too. The World Wide Web, like many other distributed computing services, is what geeks call a client-server system. Client-server refers to two pieces of software, a client that makes a request, and a server that receives and attempts to fulfill the request. In our WWW scenario, the client is the browser (e.g., Internet Explorer, Firefox, Safari). When you type a Web site’s address into the location field of your browser, you’re telling the client to “go find the Web server software at the address provided, and tell the server to return the Web site requested.” It is possible to link simple scripting languages to a Web server for performing calculations, accessing databases, or customizing Web sites. But more advanced distributed environments may use a category of software called an application server. The application server (or app server) houses business logic for a distributed system. Individual Web services served up by the app server are programmed to perform different tasks: returning a calculation (“sales tax for your order will be \$11.58”), accessing a database program (“here are the results you searched for”), or even making a request to another server in another organization (“Visa, please verify this customer’s credit card number for me”). Figure 9.6 In this multitiered distributed system, client browsers on various machines (desktop, laptop, mobile) access the system through the Web server. The cash register doesn’t use a Web browser, so instead the cash register logic is programmed to directly access the services it needs from the app server. Web services accessed from the app server may be asked to do a variety of functions, including perform calculations, access corporate databases, or even make requests from servers at other firms (for example, to verify a customer’s credit card). 
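Here is a minimal, self-contained sketch of the client-server idea described above: one piece of software (a tiny “Web service” that performs the sales-tax calculation mentioned earlier) fulfills requests, while another piece of software (the client) calls it and interprets the response. The port number, URL path, and tax rate are arbitrary choices made for illustration and do not reflect any real firm’s API.

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

TAX_RATE = 0.08  # arbitrary illustrative rate

class TaxService(BaseHTTPRequestHandler):
    # The "server": receives a request such as /tax?amount=144.75 and returns a result.
    def do_GET(self):
        amount = float(self.path.split("=")[-1])
        body = json.dumps({"sales_tax": round(amount * TAX_RATE, 2)}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

server = HTTPServer(("localhost", 8080), TaxService)
threading.Thread(target=server.serve_forever, daemon=True).start()

# The "client": any program (a browser, a cash register, a phone app) that knows the
# agreed-upon way to call the service and the kind of response to expect back.
with urllib.request.urlopen("http://localhost:8080/tax?amount=144.75") as response:
    print(json.loads(response.read()))   # {'sales_tax': 11.58}

server.shutdown()
```

The agreed-upon request format and response format play the role of the API: any client that follows them—browser, cash register, or a program at another firm—can use the service without knowing how it is implemented.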
Those little chunks of code that are accessed via the application server are sometimes referred to as Web services. The World Wide Web consortium defines Web services as software systems designed to support interoperable machine-to-machine interaction over a network2. And when computers can talk together (instead of people), this often results in fewer errors, time savings, cost reductions, and can even create whole new ways of doing business! Each Web service defines the standard method for other programs to request it to perform a task and defines the kind of response the calling client can expect back. These standards are referred to as application programming interfaces (APIs). Look at the advantages that Web services bring a firm like Amazon. Using Web services, the firm can allow the same order entry logic to be used by Web browsers, mobile phone applications, or even by third parties who want to access Amazon product information and place orders with the firm (there’s an incentive to funnel sales to Amazon—the firm will give you a cut of any sales that you send Amazon’s way). Organizations that have created a robust set of Web services around their processes and procedures are said to have a service-oriented architecture (SOA). Organizing systems like this, with separate applications in charge of client presentation, business logic, and database, makes systems more flexible. Code can be reused, and each layer can be separately maintained, upgraded, or migrated to new hardware—all with little impact on the others. Web services sound geeky, but here’s a concrete example illustrating their power. Southwest Airlines had a Web site where customers could book flights, but many customers also wanted to rent a car or book a hotel, too. To keep customers on Southwest.com, the firm and its hotel and rental car partners created a set of Web services and shared the APIs. Now customers visiting Southwest.com can book a hotel stay and rental car on the same page where they make their flight reservation. This process transforms Southwest.com into a full service travel destination and allows the site to compete head-to-head with the likes of Expedia, Travelocity, and Orbitz (McCarthy, 2002). Think about why Web services are important from a strategic perspective. By adding hotel and rental car services, Southwest is now able to eliminate the travel agent, along with any fees they might share with the agent. This shortcut allows the firm to capture more profits or pass on savings to customers, securing its position as the first place customers go for low-cost travel. And perhaps most importantly, Southwest can capture key data from visitor travel searches and bookings (something it likely couldn’t do if customers went to a site like Expedia or Travelocity). Data is a hugely valuable asset, and this kind of customer data can be used by Southwest to send out custom e-mail messages and other marketing campaigns to bring customers back to the airline. As geeky as they might at first seem, Web services can be very strategic! Figure 9.7 Southwest.com uses Web services to allow car rental and hotel firms to book services through Southwest. This process transforms Southwest.com into a full-service online travel agent. • Messaging Standards Two additional terms you might hear within the context of distributed computing are EDI and XML. EDI (electronic data interchange) is a set of standards for exchanging information between computer applications. 
EDI is most often used as a way to send the electronic equivalent of structured documents between different organizations. Using EDI, each element in the electronic document, like a firm name, address, or customer number, is coded so that it can be recognized by the receiving computer program. Eliminating paper documents makes businesses faster and lowers data entry and error costs. One study showed that firms that used EDI decreased their error rates by 82 percent and their cost of producing each document fell by up to 96 percent2. EDI is a very old standard, with roots stretching back to the 1948 Berlin Air Lift. While still in use, a new generation of more-flexible technologies for specifying data standards is taking its place. Chief among the technologies replacing EDI is extensible markup language (XML). XML has lots of uses, but in the context of distributed systems, it allows software developers to create a set of standards for common data elements that, like EDI messages, can be sent between different kinds of computers, different applications, and different organizations. XML is often thought of as easier to code than EDI, and it’s more robust because it can be extended—organizations can create formats to represent any kind of data (e.g., a common part number, photos, the complaint field collected by customer support personnel). In fact, most messages sent between Web services are coded in XML (the technology is a key enabler in mashups, discussed in Chapter 7 “Peer Production, Social Media, and Web 2.0”). Many computer programs also use XML as a way to export and import data in a common format that can be used regardless of the kind of computer hardware, operating system, or application program used. And if you design Web sites, you might encounter XML as part of the coding behind the cascading style sheets (CSS) that help maintain a consistent look and feel to the various Web pages in a given Web site. Rearden Commerce: A Business Built on Web Services Web services, APIs, and open standards not only transform businesses, they can create entire new firms that change how we get things done. For a look at the mashed-up, integrated, hyperautomated possibilities that Web services make possible, check out Rearden Commerce, a Foster City, California, firm that is using this technology to become what AMR’s Chief Research Officer referred to as “Travelocity on Steroids.” Using Rearden, firms can offer their busy employees a sort of Web-based concierge/personal assistant. Rearden offers firms a one-stop shop where employees can not only make the flight, car, and hotel bookings they might do from a travel agent, they can also book dinner reservations, sports and theatre tickets, and arrange for business services like conference calls and package shipping. Rearden doesn’t supply the goods and services it sells. Instead it acts as the middleman between transactions. A set of open APIs to its Web services allows Rearden’s one hundred and sixty thousand suppliers to send product and service data to Rearden, and to receive booking and sales data from the site. 
In this ultimate business mashup, a mobile Rearden user could use her phone to book a flight into a client city, see restaurants within a certain distance of her client’s office, have these locations pop up on a Google map, have listings accompanied by Zagat ratings and cuisine type, book restaurant reservations through Open Table, arrange for a car and driver to meet her at her client’s office at a specific time, and sync up these reservations with her firm’s corporate calendaring systems. If something unexpected comes up, like a flight delay, Rearden will be sure she gets the message. The system will keep track of any cancelled reservation credits, and also records travel reward programs, so Rearden can be used to spend those points in the future. In order to pull off this effort, the Rearden maestros are not only skilled at technical orchestration, but also in coordinating customer and supplier requirements. As TechCrunch’s Erick Schonfeld put it, “The hard part is not only the technology—which is all about integrating an unruly mess of APIs and Web services—[it also involves] signing commercially binding service level agreements with [now over 160,000] merchants across the world.” For its efforts, Rearden gets to keep between 6 percent and 25 percent of every nontravel dollar spent, depending on the service. The firm also makes money from subscriptions, and distribution deals. The firm’s first customers were large businesses and included ConAgra, GlaxoSmithKline, and Motorola. Rearden’s customers can configure the system around special parameters unique to each firm: to favor a specific airline, benefit from a corporate discount, or to restrict some offerings for approved employees only. Rearden investors include JPMorgan Chase and American Express—both of whom offer Rearden to their employees and customers. Even before the consumer version was available, Rearden had over four thousand corporate customers and two million total users, a user base larger than better-known firms like Salesforce.com (Arrington, 2007; Schonfeld, 2008; Arrington, 2009). For all the pizzazz we recognize that, as a start-up, the future of Rearden Commerce remains uncertain; however, the firm’s effective use of Web services illustrates the business possibilities as technologies allow firms to connect with greater ease and efficiency. Connectivity has made our systems more productive and enables entire new strategies and business models. But these wonderful benefits come at the price of increased risk. When systems are more interconnected, opportunities for infiltration and abuse also increase. Think of it this way—each “connection” opportunity is like adding another door to a building. The more doors that have to be defended, the more difficult security becomes. It should be no surprise that the rise of the Internet and distributed computing has led to an explosion in security losses by organizations worldwide. Key Takeaways • Client-server computing is a method of distributed computing where one program (a client) makes a request to be fulfilled by another program (a server). • Server is a tricky term and is sometimes used to refer to hardware. While server-class hardware refers to more powerful computers designed to support multiple users, just about any PC or notebook can be configured to run server software. • Web servers serve up Web sites and can perform some scripting. • Most firms serve complex business logic from an application server. 
• Isolating a system's logic in three or more layers (presentation or user interface, business logic, and database) can allow a firm flexibility in maintenance, reusability, and in handling upgrades.
• Web services allow different applications to communicate with one another. APIs define the method to call a Web service (e.g., to get it to do something), and the kind of response the calling program can expect back.
• Web services make it easier to link applications as distributed systems, and can make it easier for firms to link their systems across organizations.
• Popular messaging standards include EDI (older) and XML. Sending messages between machines instead of physical documents can speed processes, drastically cut the cost of transactions, and reduce errors.
• Distributed computing can yield enormous efficiencies in speed, error reduction, and cost savings and can create entirely new ways of doing business.
• When computers can communicate with each other (instead of people), this often results in fewer errors, time savings, cost reductions, and can even create whole new ways of doing business.
• Web services, APIs, and open standards not only transform businesses, they can create entire new firms that change how we get things done.

Questions and Exercises

1. Differentiate the term "server" used in a hardware context from "server" used in a software context.
2. Describe the "client-server" model of distributed computing. What products that you use would you classify as leveraging client-server computing?
3. List the advantages that Web services have brought to Amazon.
4. How has Southwest Airlines utilized Web services to its competitive advantage?
5. What is Rearden Commerce and which technologies does it employ? Describe Rearden Commerce's revenue model. Who were the firm's first customers? Who were among its first investors?
6. What are the security risks associated with connectivity, the Internet, and distributed processing?
Learning Objectives

After studying this section you should be able to do the following:

1. Understand, at a managerial level, what programming languages are and how software is developed.
2. Recognize that an operating system and microprocessor constrain the platform upon which most compiled application software will run.
3. Understand what Java is and why it is significant.
4. Know what scripting languages are.

So you've got a great idea that you want to express in software—how do you go about creating a program? Programmers write software in a programming language. While each language has its strengths and weaknesses, most commercial software is written in C++ (pronounced "see plus plus") or C# (pronounced "see sharp"). Visual Basic (from Microsoft) and Java (from Sun) are also among the more popular of the dozens of programming languages available. Web developers may favor specialty languages like Ruby and Python, while languages like SQL are used in databases.

Most professional programmers use an integrated development environment (IDE) to write their code. The IDE includes a text editor, a debugger for sleuthing out errors, and other useful programming tools. The most popular IDE for Windows is Visual Studio, while Apple offers the Xcode IDE. Most IDEs can support several different programming languages. The IDE will also compile a programmer's code, turning the higher-level lines of instructions that are readable by humans into lower-level instructions expressed as the patterns of ones and zeros that are readable by a computer's microprocessor.

Figure 9.8 Microsoft's Visual Studio IDE supports desktop, server, mobile, and cloud computing software development.

Look at the side of a box of commercial software and you're likely to see system requirements that specify the operating system and processor that the software is designed for (e.g., "this software works on computers with Windows 7 and Intel-compatible processors"). Wouldn't it be great if software could be written once and run everywhere? That's the idea behind Java—a programming language developed by Sun Microsystems. Java programmers don't write code with specific operating system commands (say for Windows, Mac OS X, or Linux); instead, they use special Java commands to control their user interface or interact with the display and other hardware. Java programs can run on any computer that has a Java Virtual Machine (JVM), a software layer that interprets Java code so that it can be understood by the operating system and processor of a given computer. Java's platform independence—the ability for developers to "write once, run everywhere"—is its biggest selling point.

Many Web sites execute Java applets to run the animation you might see in advertisements or games. Java has also been deployed on over six billion mobile phones worldwide, and is popular among enterprise programmers who want to be sure their programs can scale from smaller hardware up to high-end supercomputers. As long as the machine receiving the Java code has a JVM, then the Java application should run. However, Java has not been popular for desktop applications. Since Java isn't optimized to take advantage of interface elements specific to the Mac or Windows, most Java desktop applications look clunky and unnatural. Java code that runs through the JVM interpreter is also slower than code compiled for the native OS and processor that make up a platform1.

Scripting languages are the final category of programming tool that we'll cover.
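Before turning to scripting languages, here is a minimal sketch of Java's "write once, run everywhere" idea. The class below is hypothetical but uses only standard Java; compiled once into bytecode, the same class file should run on any machine with a JVM.

```java
// HelloPlatform.java
// Compile once:   javac HelloPlatform.java   (produces HelloPlatform.class bytecode)
// Run anywhere:   java HelloPlatform         (any OS/processor combination with a JVM)
public class HelloPlatform {
    public static void main(String[] args) {
        // These values are supplied by whichever JVM runs the bytecode, so the
        // same class file reports different answers on Windows, Mac OS X, or
        // Linux without being recompiled.
        System.out.println("Hello from " + System.getProperty("os.name")
                + ", Java " + System.getProperty("java.version"));
    }
}
```

Run the identical .class file on a Windows PC and a Linux server and only the reported values change; the bytecode itself never does.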
Scripting languages typically execute within an application. Microsoft offers a scripting language called VBScript (a derivative of Visual Basic) to automate functions in Office. And most browsers and Web servers support JavaScript, a language that helps make the Web more interactive (despite its name, JavaScript is unrelated to Java). Scripting languages are interpreted within their applications, rather than compiled to run directly by a microprocessor. This makes them slower than the compiled code found in most commercial software. But scripting languages are usually easy to use, and are often employed both by professional programmers and power users.

Key Takeaways

• Programs are often written in a tool called an IDE, an application that includes an editor (a sort of programmer's word processor), debugger, and compiler, among other tools.
• Compiling takes code from the high-level language that humans can understand and converts it into the sets of ones and zeros in patterns representing instructions that microprocessors understand.
• Popular programming languages include C++, C#, Visual Basic, and Java.
• Most software is written for a platform—a combination of an operating system and microprocessor.
• Java is designed to be platform independent. Computers running Java have a separate layer called a Java Virtual Machine that translates (interprets) Java code so that it can be executed on an operating system/processor combination. In theory, Java is "write once, run everywhere," as opposed to conventional applications that are written for an operating system and compiled for an OS/processor combination.
• Java is popular on mobile phones, in enterprise computing, and for making Web sites more interactive. Java has never been a successful replacement for desktop applications, largely because user interface differences among the various operating systems are too great to be easily standardized.
• Scripting languages are interpreted languages, such as VBScript or JavaScript. Many scripting languages execute within an application (like the Office programs, a Web browser, or to support the functions of a Web server). They are usually easier to program, but are less powerful and execute more slowly than compiled languages.

Questions and Exercises

1. List popular programming languages.
2. What's an IDE? Why do programmers use IDEs? Name IDEs popular for Windows and Mac users.
3. What is the difference between a compiled programming language and an interpreted programming language?
4. Name one advantage and one disadvantage of scripting languages.
5. In addition to computers, on what other technology has Java been deployed? Why do you suppose Java is particularly attractive for these kinds of applications?
6. What's a JVM? Why do you need it?
7. What if a programmer wrote perfect Java code, but there was a bug on the JVM installed on a given computer? What might happen?
8. Why would developers choose to write applications in Java? Why might they skip Java and choose another programming language?
9. Why isn't Java popular for desktop applications?
10. Go to http://www.java.com. Click on "Do I have Java?" Is Java running on your computer? Which version?

1Some offerings have attempted to overcome the speed issues associated with interpreting Java code. Just-in-time compilation stores code in native processor-executable form after each segment is initially interpreted, further helping to speed execution.
Other environments allow for Java to be compiled ahead of time so that it can be directly executed by a microprocessor. However, this process eliminates code portability—Java’s key selling point. And developers preparing their code for the JVM actually precompile code into something called Java bytecode, a format that’s less human friendly but more quickly interpreted by JVM software.
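To make the earlier point about scripting languages concrete (script code is interpreted inside a host application rather than compiled for the processor), here is a hedged sketch that uses Java's standard javax.script API to run a snippet of JavaScript inside a Java program. It assumes a JavaScript engine is actually available: Nashorn shipped with older JDKs, while recent JDKs need an engine such as GraalJS added separately, otherwise the lookup below returns null.

```java
import javax.script.ScriptEngine;
import javax.script.ScriptEngineManager;

public class ScriptHost {
    public static void main(String[] args) throws Exception {
        // Ask the JVM for a JavaScript engine. This lookup can fail on recent
        // JDKs that no longer bundle one, hence the null check below.
        ScriptEngine js = new ScriptEngineManager().getEngineByName("JavaScript");
        if (js == null) {
            System.out.println("No JavaScript engine available on this JVM.");
            return;
        }

        // The host application exposes a value to the script...
        js.put("subtotal", 120.0);

        // ...and the script, interpreted at run time rather than compiled to
        // native code, automates a small task inside the application.
        Object total = js.eval("var tax = subtotal * 0.08; subtotal + tax;");
        System.out.println("Total with tax: " + total);
    }
}
```

The pattern is the same one Office follows with VBScript and browsers follow with JavaScript: the application hosts an interpreter so users and developers can automate it without recompiling anything.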
Learning Objectives

After studying this section you should be able to do the following:

1. List the different cost categories that comprise total cost of ownership.
2. Understand that once a system is implemented, the costs of maintaining and supporting the system continue.
3. List the reasons that technology development projects fail and the measures that can be taken to increase the probability of success.

Managers should recognize that there are a whole host of costs that are associated with creating and supporting an organization's information systems. Of course, there are programming costs for custom software as well as purchase, configuration, and licensing costs for packaged software, but there's much, much more. There are costs associated with design and documentation (both for programmers and for users). There are also testing costs. New programs should be tested thoroughly across the various types of hardware the firm uses, and in conjunction with existing software and systems, before being deployed throughout the organization. Any errors that aren't caught can slow down a business or lead to costly mistakes that could ripple throughout an organization and its partners. Studies have shown that errors not caught before deployment could be one hundred times more costly to correct than if they were detected and corrected beforehand (Charette, 2005).

Once a system is "turned on," the work doesn't end there. Firms need to constantly engage in a host of activities to support the system that may also include the following:

• providing training and end user support
• collecting and relaying comments for system improvements
• auditing systems to ensure compliance (i.e., that the system operates within the firm's legal constraints and industry obligations)
• providing regular backup of critical data
• planning for redundancy and disaster recovery in case of an outage
• vigilantly managing the moving target of computer security issues

With so much to do, it's no wonder that firms spend 70 to 80 percent of their information systems (IS) budgets just to keep their systems running (Rettig, 2007). The price tag and complexity of these tasks can push some managers to think of technology as being a cost sink rather than a strategic resource. These costs are often collectively referred to as the total cost of ownership (TCO) of an information system. Understanding TCO is critical when making technology investment decisions. TCO is also a major driving force behind the massive tech industry changes discussed in Chapter 10 "Software in Flux: Partly Cloudy and Sometimes Free".

• Why Do Technology Projects Fail?

Even though information systems represent the largest portion of capital spending at most firms, an astonishing one in three technology development projects fail to be successfully deployed (Dignan, 2007). Imagine if a firm lost its investment in one out of every three land purchases, or in building one of every three factories. These statistics are dismal! Writing in IEEE Spectrum, risk consultant Robert Charette provides a sobering assessment of the cost of software failures, stating, "The yearly tab for failed and troubled software conservatively runs somewhere from $60 to $70 billion in the United States alone. For that money, you could launch the space shuttle one hundred times, build and deploy the entire 24-satellite Global Positioning System, and develop the Boeing 777 from scratch—and still have a few billion left over" (Charette, 2005). Why such a bad track record?
Sometimes technology itself is to blame, other times it's a failure to test systems adequately, and sometimes it's a breakdown of process and procedures used to set specifications and manage projects. In one example, a multimillion-dollar loss on the NASA Mars Climate Orbiter was traced back to a laughably simple oversight—Lockheed Martin contractors using English measurements, while the folks at NASA used the metric system (Lloyd, 1999). Yes, a $125 million taxpayer investment was lost because a bunch of rocket scientists failed to pay attention to third grade math. When it comes to the success or failure of technical projects, the devil really is in the details.

Projects rarely fail for just one reason. Project post-mortems often point to a combination of technical, project management, and business decision blunders. The most common factors include the following2:

• Unrealistic or unclear project goals
• Poor project leadership and weak executive commitment
• Inaccurate estimates of needed resources
• Badly defined system requirements and allowing "feature creep" during development
• Poor reporting of the project's status
• Poor communication among customers, developers, and users
• Use of immature technology
• Unmanaged risks
• Inability to handle the project's complexity
• Sloppy development and testing practices
• Poor project management
• Stakeholder politics
• Commercial pressures (e.g., leaving inadequate time or encouraging corner-cutting)

Managers need to understand the complexity involved in their technology investments, and that achieving success rarely lies with the strength of the technology alone. But there is hope. Information systems organizations can work to implement procedures to improve the overall quality of their development practices. Mechanisms for quality improvement include capability maturity model integration (CMMI), which gauges an organization's process maturity and capability in areas critical to developing and deploying technology projects, and provides a carefully chosen set of best practices and guidelines to assist quality and process improvement1 (Kay, 2005).

Firms are also well served to leverage established project planning and software development methodologies that outline critical business processes and stages when executing large-scale software development projects. The idea behind these methodologies is straightforward—why reinvent the wheel when there is an opportunity to learn from and follow blueprints used by those who have executed successful efforts? When methodologies are applied to projects that are framed with clear business goals and business metrics, and that engage committed executive leadership, success rates can improve dramatically (Shenhar & Dvir, 2007).

While software development methodologies are the topic of more advanced technology courses, the savvy manager knows enough to inquire about the development methodologies and quality programs used to support large scale development projects, and can use these investigations as further input when evaluating whether those overseeing large scale efforts have what it takes to get the job done.

Key Takeaways

• The care and feeding of information systems can be complex and expensive. The total cost of ownership of systems can include software development and documentation, or the purchase price and ongoing license and support fees, plus configuration, testing, deployment, maintenance, support, training, compliance auditing, security, backup, and provisions for disaster recovery. These costs are collectively referred to as TCO, or a system's total cost of ownership.
• Information systems development projects fail at a startlingly high rate. Failure reasons can stem from any combination of technical, process, and managerial decisions.
• IS organizations can leverage software development methodologies to improve their systems development procedures, and firms can strive to improve the overall level of procedures used in the organization through models like CMMI. However, it's also critical to engage committed executive leadership in projects, and to frame projects using business metrics and outcomes to improve the chance of success.
• System errors that aren't caught before deployment can slow down a business or lead to costly mistakes that could ripple throughout an organization. Studies have shown that errors not caught before deployment could be 100 times more costly to correct than if they were detected and corrected beforehand.
• Firms spend 70 to 80 percent of their IS budgets just to keep their systems running.
• One in three technology development projects fail to be successfully deployed.
• IS organizations can employ project planning and software development methodologies to implement procedures to improve the overall quality of their development practices.

Questions and Exercises

1. List the types of total ownership costs associated with creating and supporting an organization's information systems.
2. On average, what percent of firms' IS budgets is spent to keep their systems running?
3. What are the possible effects of not detecting and fixing major system errors before deployment?
4. List some of the reasons for the failure of technology development projects.
5. What is the estimated yearly cost of failed technology development projects?
6. What was the reason attributed to the failure of the NASA Mars Climate Orbiter project?
7. What is capability maturity model integration (CMMI) and how is it used to improve the overall quality of a firm's development practices?
8. Perform an Internet search for "IBM Rational Portfolio Manager." How might IBM's Rational Portfolio Manager software help companies realize more benefit from their IT systems development project expenditures? What competing versions of this product are offered by other organizations?
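To tie these cost categories together, here is a small, entirely hypothetical sketch; every figure is invented for illustration and real projects will differ widely. It simply totals an up-front cost against several years of ongoing expenses to arrive at a rough total cost of ownership.

```java
public class TcoSketch {
    public static void main(String[] args) {
        // All figures below are invented for illustration only.
        double upFront = 250_000;        // purchase, configuration, testing, deployment
        double annualSupport = 60_000;   // training, help desk, backups, security patching
        double annualLicenses = 40_000;  // ongoing license and support fees
        int yearsInService = 5;

        double ongoing = (annualSupport + annualLicenses) * yearsInService;
        double tco = upFront + ongoing;

        System.out.printf("Five-year TCO: $%,.0f%n", tco);
        System.out.printf("Share of TCO spent after go-live: %.0f%%%n", 100 * ongoing / tco);
    }
}
```

Even in this toy example, roughly two-thirds of the money is spent after the system goes live, which is why managers pay so much attention to ongoing support and maintenance costs.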
Learning Objectives

After studying this section you should be able to do the following:

1. Understand how low marginal costs, network effects, and switching costs have combined to help create a huge and important industry.
2. Recognize that the software industry is undergoing significant and broadly impactful change brought about by several increasingly adopted technologies including open source software, cloud computing, and software-as-a-service.

For many, software has been a magnificent business. It is the two-hundred-billion-dollar-per-year juggernaut (Kirkpatrick, 2004) that placed Microsoft's Bill Gates and Oracle's Larry Ellison among the wealthiest people in the world. Once a successful software product has been written, the economics for a category-leading offering are among the best you'll find in any industry. Unlike physical products assembled from raw materials, the marginal cost to produce an additional copy of a software product is effectively zero. Just duplicate, no additional input required. That quality leads to businesses that can gush cash. Microsoft generates one and a half billion dollars a month from Windows and Office alone (Vogelstein, 2006). Network effects and switching costs can also offer a leading software firm a degree of customer preference and lock-in that can establish a firm as a standard, and in many cases creates winner-take-all (or at least winner-take-most) markets.

But as great as the business has been, the fundamental model powering the software industry is under assault. Open source software (OSS) offerings—free alternatives where anyone can look at and potentially modify a program's code—pose a direct challenge to the assets and advantages cultivated by market leaders. Giants shudder—"How can we compete with free?"—while others wonder, "How can we make money and fuel innovation on free?" And if free software wasn't enough of a shock, the way firms and users think about software is also changing. A set of services referred to as cloud computing is making it more common for a firm to move software out of its own IS shop so that it is run on someone else's hardware. In one variant of this approach known as software as a service (SaaS), users access a vendor's software over the Internet, usually by simply starting up a Web browser. With SaaS, you don't need to own the program or install it on your own computer. Hardware clouds can let firms take their software and run it on someone else's hardware—freeing them from the burden of buying, managing, and maintaining the physical computing hardware that programs need. Another software technology called virtualization can make a single computer behave like many separate machines. This function helps consolidate computing resources and creates additional savings and efficiencies.

These transitions are important. They mean that smaller firms have access to the kinds of burly, sophisticated computing power that only giants had access to in the past. Start-ups can scale quickly and get up and running with less investment capital. Existing firms can leverage these technologies to reduce costs. Got tech firms in your investment portfolio? Understanding what's at work here can inform decisions you make on which stocks to buy or sell. If you make tech decisions for your firm or make recommendations for others, these trends may point to which firms have strong growth and sustainability ahead, or which may be facing troubled times.
Key Takeaways

• The software business is attractive due to near-zero marginal costs and an opportunity to establish a standard—creating the competitive advantages of network effects and switching costs.
• New trends in the software industry, including open source software (OSS), hardware clouds, software as a service (SaaS), and virtualization are creating challenges and opportunity across tech markets. Understanding the impact of these developments can help a manager make better technology choices and investment decisions.

Questions and Exercises

1. What major trends, outlined in the section above, are reshaping how we think about software? What industries and firms are potentially impacted by these changes? Why do managers, investors, and technology buyers care about these changes?
2. Which organizations might benefit from these trends? Which might be threatened? Why?
3. What are marginal costs? Are there other industries that have cost economics similar to the software industry?
4. Investigate the revenues and net income of major software players: Microsoft, Google, Oracle, Red Hat, and Salesforce.com. Which firms have higher revenues? Net income? Which have better margins? What do the trends in OSS, SaaS, and cloud computing suggest for these and similar firms?
5. How might the rise of OSS, SaaS, and cloud computing impact hardware sales? How might it impact entrepreneurship and smaller businesses?

10.02: Open Source

Learning Objectives

After studying this section you should be able to do the following:

1. Define open source software and understand how it differs from conventional offerings.
2. Provide examples of open source software and how firms might leverage this technology.

Who would have thought a twenty-one-year-old from Finland could start a revolution that continues to threaten the Microsoft Windows empire? But Linus Torvalds did just that. During a marathon six-month coding session, Torvalds created the first version of Linux (Diamond, 2008), marshalling open source revolutionaries like no one before him. Instead of selling his operating system, Torvalds gave it away. Now morphed and modified into scores of versions by hundreds of programmers, Linux can be found just about everywhere, and most folks credit Linux as being the most significant product in the OSS arsenal. Today Linux powers everything from cell phones to stock exchanges, set-top boxes to supercomputers. You'll find the OS on 30 percent of the servers in corporate America (Lacy, 2006), and supporting most Web servers (including those at Google, Amazon, and Facebook). Linux forms the core of the TiVo operating system, it underpins Google's Android and Chrome OS offerings, and it has even gone interplanetary. Linux has been used to power the Phoenix Lander and to control the Spirit and Opportunity Mars rovers (Brockmeier, 2004; Barrett, 2008). Yes, Linux is even on Mars!

How Do You Pronounce Linux?

Most English speakers in the know pronounce Linux in a way that rhymes with "cynics." You can easily search online to hear video and audio clips of Linus (whose name is actually pronounced "Lean-us" in Finnish) pronouncing the name of his OS. In deference to Linux, some geeks prefer something that sounds more like "lean-ooks."1 Just don't call it "line-ucks," or the tech-savvy will think you're an open source n00b!
Oh yeah, and while we're on the topic of operating system pronunciation, the Macintosh operating system OS X is pronounced "oh es ten."

Figure 10.1 Tux, the Linux Mascot
Andrés Álvarez Iglesias – linux-logo – CC BY 2.0.
Learning Objectives After studying this section you should be able to do the following: 1. Know the primary reasons firms choose to use OSS. 2. Understand how OSS can beneficially impact industry and government. There are many reasons why firms choose open source products over commercial alternatives: Cost—Free alternatives to costly commercial code can be a tremendous motivator, particularly since conventional software often requires customers to pay for every copy used and to pay more for software that runs on increasingly powerful hardware. Big Lots stores lowered costs by as much as \$10 million by finding viable OSS (Castelluccio, 2008) to serve their system needs. Online broker E*TRADE estimates that its switch to open source helped save over \$13 million a year (King, 2008). And Amazon claimed in SEC filings that the switch to open source was a key contributor to nearly \$20 million in tech savings (Shankland, et. al., 2001). Firms like TiVo, which use OSS in their own products, eliminate a cost spent either developing their own operating system or licensing similar software from a vendor like Microsoft. Reliability—There’s a saying in the open source community, “Given enough eyeballs, all bugs are shallow” (Raymond, 1999). What this means is that the more people who look at a program’s code, the greater the likelihood that an error will be caught and corrected. The open source community harnesses the power of legions of geeks who are constantly trawling OSS products, looking to squash bugs and improve product quality. And studies have shown that the quality of popular OSS products outperforms proprietary commercial competitors (Ljungberg, 2000). In one study, Carnegie Mellon University’s Cylab estimated the quality of Linux code to be less buggy than commercial alternatives by a factor of two hundred (Castelluccio, 2008)! Security—OSS advocates also argue that by allowing “many eyes” to examine the code, the security vulnerabilities of open source products come to light more quickly and can be addressed with greater speed and reliability (Wheeler, 2003). High profile hacking contests have frequently demonstrated the strength of OSS products. In one well-publicized 2008 event, laptops running Windows and Macintosh were both hacked (the latter in just two minutes), while a laptop running Linux remained uncompromised (McMillan, 2008). Government agencies and the military often appreciate the opportunity to scrutinize open source efforts to verify system integrity (a particularly sensitive issue among foreign governments leery of legislation like the USA PATRIOT Act of 2001) (Lohr, 2003). Many OSS vendors offer security focused (sometimes called hardened) versions of their products. These can include systems that monitor the integrity of an OSS distribution, checking file size and other indicators to be sure that code has not been modified and redistributed by bad guys who’ve added a back door, malicious routines, or other vulnerabilities. Scalability—Many major OSS efforts can run on everything from cheap commodity hardware to high-end supercomputing. Scalability allows a firm to scale from start-up to blue chip without having to significantly rewrite their code, potentially saving big on software development costs. Not only can many forms of OSS be migrated to more powerful hardware, packages like Linux have also been optimized to balance a server’s workload among a large number of machines working in tandem. Brokerage firm E*TRADE claims that usage spikes following 2008 U.S. 
Federal Reserve moves flooded the firm’s systems, creating the highest utilization levels in five years. But E*TRADE credits its scalable open source systems for maintaining performance while competitors’ systems struggled (King, 2008). Agility and Time to Market—Vendors who use OSS as part of product offerings may be able to skip whole segments of the software development process, allowing new products to reach the market faster than if the entire software system had to be developed from scratch, in-house. Motorola has claimed that customizing products built on OSS has helped speed time-to-market for the firm’s mobile phones, while the team behind the Zimbra e-mail and calendar effort built their first product in just a few months by using some forty blocks of free code (Guth, 2006). Key Takeaways • The most widely cited benefits of using OSS include low cost; increased reliability; improved security and auditing; system scalability; and helping a firm improve its time to market. • Free OSS has resulted in cost savings for many large companies in several industries. • OSS often has fewer bugs than its commercial counterparts due to the large number of persons who have looked at the code. • The huge exposure to scrutiny by developers and other people helps to strengthen the security of OSS. • “Hardened” versions of OSS products often include systems that monitor the integrity of an OSS distribution, checking file size and other indicators to be sure that code has not been modified and redistributed by bad guys who have added a back door, malicious routines, or other vulnerabilities. • OSS can be easily migrated to more powerful computers as circumstances dictate, and also can balance workload by distributing work over a number of machines. • Vendors who use OSS as part of product offerings may be able to skip whole segments of the software development process, allowing new products to reach the market faster. Questions and Exercises 1. What advantages does OSS offer TiVo? What alternatives to OSS might the firm consider and why do you suppose the firm decided on OSS? 2. What’s meant by the phrase, “Given enough eyeballs, all bugs are shallow”? Provide evidence that the insight behind this phrase is an accurate one. 3. How has OSS benefited E*TRADE? Amazon? Motorola? Zimbra? What benefits were achieved in each of these examples? 4. Describe how OSS provides a firm with scalability. What does this mean, and why does this appeal to a firm? What issues might a firm face if chosen systems aren’t scalable? 5. The Web site NetCraft (http://www.netcraft.com) is one of many that provide a tool to see the kind of operating system and Web server software that a given site is running. Visit NetCraft or a similar site and enter the address of some of your favorite Web sites. How many run open source products (e.g., the Linux OS or Apache Web server)? Do some sites show their software as “unknown”? Why might a site be reluctant to broadcast the kind of software that it uses? 10.04: Examples of Open Source Software Learning Objectives After studying this section you should be able to do the following: 1. Recognize that just about every type of commercial product has an open source equivalent. 2. Be able to list commercial products and their open source competitors. Just about every type of commercial product has an open source equivalent. SourceForge.net lists over two hundred and thirty thousand such products1! 
Many of these products come with the installation tools, support utilities, and full documentation that make them difficult to distinguish from traditional commercial efforts (Woods, 2008). In addition to the LAMP products, some major examples include the following:

• Firefox—a Web browser that competes with Internet Explorer
• OpenOffice—a competitor to Microsoft Office
• GIMP—a graphic tool with features found in Photoshop
• Alfresco—collaboration software that competes with Microsoft SharePoint and EMC's Documentum
• Marketcetera—an enterprise trading platform for hedge fund managers that competes with FlexTrade and Portware
• Zimbra—open source e-mail software that competes with Microsoft Exchange
• MySQL, Ingres, and EnterpriseDB—open source database software packages that each go head-to-head with commercial products from Oracle, Microsoft, Sybase, and IBM
• SugarCRM—customer relationship management software that competes with Salesforce.com and Siebel
• Asterisk—an open source implementation for running a PBX corporate telephony system that competes with offerings from Nortel and Cisco, among others
• FreeBSD and Sun's OpenSolaris—open source versions of the Unix operating system

Key Takeaways

• There are thousands of open source products available, covering nearly every software category. Many have a sophistication that rivals commercial software products.
• Not all open source products are contenders. Less popular open source products are not likely to attract the community of users and contributors necessary to help these products improve over time (again we see network effects are a key to success—this time in determining the quality of an OSS effort).
• Just about every type of commercial product has an open source equivalent.

Questions and Exercises

1. Visit http://www.SourceForge.net. Make a brief list of commercial product categories that an individual or enterprise might use. Are there open source alternatives for these categories? Are well-known firms leveraging these OSS offerings? Which commercial firms do they compete with?
2. Are the OSS efforts you identified above provided by commercial firms, nonprofit organizations, or private individuals? Does this make a difference in your willingness to adopt a particular product? Why or why not? What other factors influence your adoption decision?
3. Download a popular, end-user version of an OSS tool that competes with a desktop application that you own, or that you've used (hint: choose something that's a smaller file or easy to install). What do you think of the OSS offering compared to the commercial product? Will you continue to use the OSS product? Why or why not?
Learning Objectives After studying this section you should be able to do the following: 1. Understand the disproportional impact OSS has on the IT market. 2. Understand how vendors make money on open source. 3. Know what SQL and MySQL are. Open source is a sixty-billion-dollar industry (Asay, 2008), but it has a disproportionate impact on the trillion-dollar IT market. By lowering the cost of computing, open source efforts make more computing options accessible to smaller firms. More reliable, secure computing also lowers costs for all users. OSS also diverts funds that firms would otherwise spend on fixed costs, like operating systems and databases, so that these funds can be spent on innovation or other more competitive initiatives. Think about Google, a firm that some estimate has over 1.4 million servers. Imagine the costs if it had to license software for each of those boxes! Commercial interest in OSS has sparked an acquisition binge. Red Hat bought open source application server firm JBoss for \$350 million. Novell snapped up SUSE Linux for \$210 million. And Sun plunked down over \$1 billion for open source database provider MySQL (Greenberg, 2008). And with Oracle’s acquisition of Sun, one of the world’s largest commercial software firms has zeroed in on one of the deepest portfolios of open source products. But how do vendors make money on open source? One way is by selling support and consulting services. While not exactly Microsoft money, Red Hat, the largest purely OSS firm, reported half a billion dollars in revenue in 2008. The firm had two and a half million paid subscriptions offering access to software updates and support services (Greenberg, 2008). Oracle, a firm that sells commercial ERP and database products, provides Linux for free, selling high-margin Linux support contracts for as much as five hundred thousand dollars (Fortt, 2007). The added benefit for Oracle? Weaning customers away from Microsoft—a firm that sells many products that compete head-to-head with Oracle’s offerings. Service also represents the most important part of IBM’s business. The firm now makes more from services than from selling hardware and software (Robertson, 2009). And every dollar saved on buying someone else’s software product means more money IBM customers can spend on IBM computers and services. Sun Microsystems was a leader in OSS, even before the Oracle acquisition bid. The firm has used OSS to drive advanced hardware sales, but the firm also sells proprietary products that augment its open source efforts. These products include special optimization, configuration management, and performance tools that can tweak OSS code to work its best (Preimesberger, 2008). Here’s where we also can relate the industry’s evolution to what we’ve learned about standards competition in our earlier chapters. In the pre-Linux days, nearly every major hardware manufacturer made its own, incompatible version of the Unix operating system. These fractured, incompatible markets were each so small that they had difficulty attracting third-party vendors to write application software. Now, much to Microsoft’s dismay, all major hardware firms run Linux. That means there’s a large, unified market that attracts software developers who might otherwise write for Windows. To keep standards unified, several Linux-supporting hardware and software firms also back the Linux Foundation, the nonprofit effort where Linus Torvalds serves as a fellow, helping to oversee Linux’s evolution. 
Sharing development expenses in OSS has been likened to going in on a pizza together. Everyone wants a pizza with the same ingredients. The pizza doesn't make you smarter or better. So why not share the cost of a bigger pie instead of buying by the slice (Cohen, 2008)? With OSS, hardware firms spend less money than they would in the brutal, head-to-head competition where each once offered a "me too" operating system that was incompatible with rivals but offered little differentiation. Hardware firms now find their technical talent can be deployed in other value-added services mentioned above: developing commercial software add-ons, offering consulting services, and enhancing hardware offerings.

Linux on the Desktop?

While Linux is a major player in enterprise software, mobile phones, and consumer electronics, the Linux OS can only be found on a tiny fraction of desktop computers. There are several reasons for this. Some suggest Linux simply isn't as easy to install and use as Windows or the Mac OS. This complexity can raise the total cost of ownership (TCO) of Linux desktops, with additional end-user support offsetting any gains from free software. The small number of desktop users also dissuades third party firms from porting popular desktop applications over to Linux. For consumers in most industrialized nations, the added complexity and limited desktop application availability of desktop Linux just isn't worth the one to two hundred dollars saved by giving up Windows.

But in developing nations where incomes are lower, the cost of Windows can be daunting. Consider the OLPC, Nicholas Negroponte's "one-hundred-dollar" laptop. An additional one hundred dollars for Windows would double the target cost for the nonprofit's machines. It is not surprising that the first OLPC laptops ran Linux. Microsoft recognizes that if a whole generation of first-time computer users grows up without Windows, they may favor open source alternatives years later when starting their own businesses. As a result, Microsoft has begun offering low-cost versions of Windows (in some cases for as little as seven dollars) in nations where populations have much lower incomes. Microsoft has even offered a version of Windows to the backers of the OLPC. While Microsoft won't make much money on these efforts, the low-cost versions will serve to entrench Microsoft products as standards in emerging markets, staving off open source rivals and positioning the firm to raise prices years later when income levels rise.

MySQL: Turning a Ten-Billion-Dollar-a-Year Business into a One-Billion-Dollar One

Finland is not the only Scandinavian country to spawn an open source powerhouse. Uppsala, Sweden's MySQL (pronounced "my sequel") is the "M" in the LAMP stack, and is used by organizations as diverse as FedEx, Lufthansa, NASA, Sony, UPS, and YouTube. The "SQL" in the name stands for the structured query language, a standard method for organizing and accessing data. SQL is also employed by commercial database products from Oracle, Microsoft, and Sybase. Even Linux-loving IBM uses SQL in its own lucrative DB2 commercial database product. Since all of these databases are based on the same standard, switching costs are lower, so migrating from a commercial product to MySQL's open source alternative is relatively easy. And that spells trouble for commercial firms. Granted, the commercial efforts offer some bells and whistles that MySQL doesn't yet have, but those extras aren't necessary in a lot of standard database use.
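To see why a shared SQL standard lowers switching costs, consider the hedged sketch below. Everything specific in it (the connection URLs, the table, and the credentials) is invented for illustration, and it assumes the appropriate JDBC driver is on the classpath. The point holds regardless: the application code and the SQL statement stay the same whether the driver underneath points at MySQL or at a commercial database.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class CustomerReport {
    public static void main(String[] args) throws Exception {
        // Swapping database vendors is largely a matter of changing this URL
        // (and the JDBC driver on the classpath); the SQL below is standard.
        // String url = "jdbc:oracle:thin:@dbhost:1521/sales";  // commercial option
        String url = "jdbc:mysql://dbhost:3306/sales";           // open source option

        // Hypothetical credentials and table, for illustration only.
        try (Connection conn = DriverManager.getConnection(url, "report_user", "secret");
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery(
                     "SELECT name, total_orders FROM customers ORDER BY total_orders DESC")) {
            while (rs.next()) {
                System.out.println(rs.getString("name") + ": " + rs.getInt("total_orders"));
            }
        }
    }
}
```

Change the URL and driver and the rest of the program is untouched, which is exactly the low-switching-cost story that worries commercial database vendors.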
Some organizations, impressed with MySQL’s capabilities, are mandating its use on all new development efforts, attempting to cordon off proprietary products in legacy code that is maintained but not expanded. Savings from using MySQL can be huge. The Web site PriceGrabber pays less than ten thousand dollars in support for MySQL compared to one hundred thousand to two hundred thousand dollars for a comparable Oracle effort. Lycos Europe switched from Oracle to MySQL and slashed costs from one hundred twenty thousand dollars a year to seven thousand dollars. And the travel reservation firm Sabre used open source products such as MySQL to slash ticket purchase processing costs by 80 percent (Lyons, 2004). MySQL does make money, just not as much as its commercial rivals. While you can download a version of MySQL over the Net, the flagship product also sells for four hundred ninety-five dollars per server computer compared to a list price for Oracle that can climb as high as one hundred sixty thousand dollars. Of the roughly eleven million copies of MySQL in use, the company only gets paid for about one in a thousand (Ricadela, 2007). Firms pay for what’s free for one of two reasons: (1) for MySQL service, and (2) for the right to incorporate MySQL’s code into their own products (Kirkpatrick, 2004). Amazon, Facebook, Gap, NBC, and Sabre pay MySQL for support; Cisco, Ericsson, HP, and Symantec pay for the rights to the code (Ricadela, 2007). Top-level round-the-clock support for MySQL for up to fifty servers is fifty thousand dollars a year, still a fraction of the cost for commercial alternatives. Founder Marten Mickos has stated an explicit goal of the firm is “turning the \$10-billion-a-year database business into a \$1 billion one” (Kirkpatrick, 2004). When Sun Microsystems spent over \$1 billion to buy Mickos’ MySQL in 2008, Sun CEO Jonathan Schwartz called the purchase the “most important acquisition in the company’s history” (Shankland, 2008). Sun hoped the cheap database software could make the firm’s hardware offerings seem more attractive. And it looked like Sun was good for MySQL, with the product’s revenues growing 55 percent in the year after the acquisition (Asay, 2009). But here’s where it gets complicated. Sun also had a lucrative business selling hardware to support commercial ERP and database software from Oracle. That put Sun and partner Oracle in a relationship where they were both competitors and collaborators (the “coopetition” or “frenemies” phenomenon mentioned in Chapter 6 “Understanding Network Effects”). Then in spring 2009, Oracle announced it was buying Sun. Oracle CEO Larry Ellison mentioned acquiring the Java language was the crown jewel of the purchase, but industry watchers have raised several questions. Will the firm continue to nurture MySQL and other open source products, even as this software poses a threat to its bread-and-butter database products? Will the development community continue to back MySQL as the de facto standard for open source SQL databases, or will they migrate to an alternative? Or will Oracle find the right mix of free and fee-based products and services that allow MySQL to thrive while Oracle continues to grow? The implications are serious for investors, as well as firms that have made commitments to Sun, Oracle, and MySQL products. 
The complexity of this environment further demonstrates why technologists need business savvy and market monitoring skills and why business folks need to understand the implications of technology and tech-industry developments. Legal Risks and Open Source Software: A Hidden and Complex Challenge Open source software isn’t without its risks. Competing reports cite certain open source products as being difficult to install and maintain (suggesting potentially higher total cost of ownership, or TCO). Adopters of OSS without support contracts may lament having to rely on an uncertain community of volunteers to support their problems and provide innovative upgrades. Another major concern is legal exposure. Firms adopting OSS may be at risk if they distribute code and aren’t aware of the licensing implications. Some commercial software firms have pressed legal action against the users of open source products when there is a perceived violation of software patents or other unauthorized use of their proprietary code. For example, in 2007 Microsoft suggested that Linux and other open source software efforts violated some two hundred thirty-five of its patents (Ricadela, 2007). The firm then began collecting payments and gaining access to the patent portfolios of companies that use the open source Linux operating system in their products, including Fuji, Samsung, and Xerox. Microsoft also cut a deal with Linux vendor Novell in which both firms pledged not to sue each other’s customers for potential patent infringements. Also complicating issues are the varying open source license agreements (these go by various names, such as GPL and the Apache License), each with slightly different legal provisions—many of which have evolved over time. Keeping legal with so many licensing standards can be a challenge, especially for firms that want to bundle open source code into their own products (Lacy, 2006). An entire industry has sprouted up to help firms navigate the minefield of open source legal licenses. Chief among these are products, such as those offered by the firm Black Duck, which analyze the composition of software source code and report on any areas of concern so that firms can honor any legal obligations associated with their offerings. Keeping legal requires effort and attention, even in an environment where products are allegedly “free.” This also shows that even corporate lawyers had best geek-up if they want to prove they’re capable of navigating a twenty-first-century legal environment. Key Takeaways • Business models for firms in the open source industry are varied, and can include selling services, licensing OSS for incorporation into commercial products, and using OSS to fuel hardware sales. • Many firms are trying to use OSS markets to drive a wedge between competitors and their customers. • Linux has been very successful on mobile devices and consumer electronics, as well as on high-end server class and above computers. But it has not been as successful on the desktop. The small user base for desktop Linux makes the platform less attractive for desktop software developers. Incompatibility with Windows applications, switching costs, and other network effects-related issues all suggest that Desktop Linux has an uphill climb in more mature markets. • MySQL is the dominant open source database software product. Adoption of the SQL standard eases some issues with migrating from commercial products to MySQL. • OSS also has several drawbacks and challenges that limit its appeal. 
These include complexity of some products and a higher total cost of ownership for some products, concern about the ability of a product’s development community to provide support or product improvement, and legal and licensing concerns. Questions and Exercises 1. Describe the impact of OSS on the IT market. 2. Show your understanding of the commercial OSS market. How do Red Hat, Oracle, Oracle’s Sun division, and IBM make money via open source? 3. Visit Mozilla.org. Which open source products does this organization develop? Investigate how development of these efforts is financed. How does this organization differ from the ones mentioned above? 4. What is the Linux Foundation? Why is it necessary? Which firms are members, underwriting foundation efforts? 5. List the reasons why Linux is installed on only a very small fraction of desktop computers. Are there particular categories of products or users who might see Linux as more appealing than conventional operating systems? Do you think Linux’s share of the desktop market will increase? Why or why not? 6. How is Microsoft combating the threat of open source software and other free tools that compete with its commercial products? 7. What is the dominant open source database software product? Which firms use this product? Why? 8. Which firm developed the leading OSS database product? Do you think it’s more or less likely that a firm would switch to an OSS database instead of an OSS office suite or desktop alternative? Why or why not? 9. How has stewardship of the leading OSS database effort changed in recent years? Who oversees the effort today? What questions does this raise for the product’s future? Although this book is updated regularly, current events continue to change after publication of this chapter. Investigate the current status of this effort—reaction of the developer community, continued reception of the product—and be prepared to share your findings with class. 10. List some of the risks associated with using OSS. Give examples of firms that might pass on OSS software, and explain why.
Learning Objectives

After studying this section you should be able to do the following:

1. Understand the concept of cloud computing.
2. Identify the two major categories of cloud computing.

Oracle Chairman Larry Ellison, lamenting the buzzword-chasing character of the tech sector, once complained that the computer industry is more fashion-focused than even the women's clothing business (Farber, 2008). Ellison has a point: when a technology term becomes fashionable, the industry hype machine shifts into overdrive. The technology attracts press attention and customer interest, and vendor marketing teams scramble to label their products and services as part of that innovation. Recently, few tech trends have been more fashionable than cloud computing.

Like Web 2.0, trying to nail down an exact definition for cloud computing is tough. In fact, it's been quite a spectacle watching industry execs struggle to clarify the concept. HP's Chief Strategy Officer "politely refused" when asked by BusinessWeek to define the term cloud computing (Hamm, 2008). Richard Stallman, founder of the Free Software Foundation, said about cloud computing, "It's worse than stupidity. It's a marketing hype campaign" (McKay, 2009). And Larry Ellison, always ready with a sound bite, offered up this priceless quip, "Maybe I'm an idiot, but I have no idea what anyone is talking about. What is it? It's complete gibberish. It's insane" (Lyons, 2008).

Insane, maybe, but also big bucks. By year-end 2008, the various businesses that fall under the rubric of cloud computing had already accounted for an estimated thirty-six-billion-dollar market. That represents a whopping 13 percent of global software sales (Liedtke, 2008)!

When folks talk about cloud computing they're really talking about replacing computing resources—either an organization's or an individual's hardware or software—with services provided over the Internet. The name actually comes from the popular industry convention of drawing the Internet or other computer network as a big cloud.

Cloud computing encompasses a bunch of different efforts. We'll concentrate on describing, providing examples, and analyzing the managerial implications of two separate categories of cloud computing: (1) software as a service (SaaS), where a firm subscribes to a third-party software-replacing service that is delivered online, and (2) models often referred to as utility computing, platform as a service, or infrastructure as a service. Using these latter techniques, an organization develops its own systems, but runs them over the Internet on someone else's hardware. A later section on virtualization will discuss how some organizations are developing their own private clouds, pools of computing resources that reside inside an organization and that can be served up for specific tasks as needs arise.

The benefits and risks of SaaS and the utility computing-style efforts are very similar, but understanding the nuances of each effort can help you figure out if and when the cloud makes sense for your organization. The evolution of cloud computing also has huge implications across the industry: from the financial future of hardware and software firms, to cost structure and innovativeness of adopting organizations, to the skill sets likely to be most valued by employers.

Key Takeaways

• Cloud computing is difficult to define.
Managers and techies use the term cloud computing to describe computing services provided over a network, most often commercial services provided over the Internet by a third party that can replace or offload tasks that would otherwise run on a user or organization’s existing hardware or software. • Software as a service (SaaS) refers to a third-party software-replacing service that is delivered online. • Hardware cloud computing services replace hardware that a firm might otherwise purchase. • Estimated to be a thirty-six-billion-dollar industry, cloud computing is reshaping software, hardware, and service markets, and is impacting competitive dynamics across industries. Questions and Exercises 1. Identify and contrast the two categories of cloud computing. 2. Define cloud computing.
textbooks/biz/Management/Book%3A_Information_Systems__A_Managers_Guide_to_Harnessing_Technology/10%3A_Software_in_Flux-_Partly_Cloudy_and_Sometimes_Free/10.06%3A_Cloud_Computing-_Hype_or_Hope.txt
Learning Objectives After studying this section you should be able to do the following: 1. Know how firms using SaaS products can dramatically lower several costs associated with their information systems. 2. Know how SaaS vendors earn their money. 3. Be able to list the benefits to users that accrue from using SaaS. 4. Be able to list the benefits to vendors from deploying SaaS. If open source isn’t enough of a threat to firms that sell packaged software, a new generation of products, collectively known as SaaS, claims that you can now get the bulk of your computing done through your Web browser. Don’t install software—let someone else run it for you and deliver the results over the Internet. Software as a service (SaaS) refers to software that is made available by a third party online. You might also see the terms ASP (application service provider) or HSV (hosted software vendor) used to identify this type of offering. SaaS is potentially a very big deal. Firms using SaaS products can dramatically lower several costs associated with the care and feeding of their information systems, including software licenses, server hardware, system maintenance, and IT staff. Most SaaS firms earn money via a usage-based pricing model akin to a monthly subscription. Others offer free services that are supported by advertising, while others promote the sale of upgraded or premium versions for additional fees. Make no mistake, SaaS is yet another direct assault on traditional software firms. The most iconic SaaS firm is Salesforce.com, an enterprise customer relationship management (CRM) provider. This “un-software” company even sports a logo featuring the word “software” crossed out, Ghostbusters-style (Hempel, 2009). Figure 10.3 The antisoftware message is evident in the logo of SaaS leader Salesforce.com. Other enterprise-focused SaaS firms compete directly with the biggest names in software. Some of these upstarts are even backed by leading enterprise software executives. Examples include NetSuite (funded in part by Oracle’s Larry Ellison—the guy’s all over this chapter), which offers a comprehensive SaaS ERP suite; and Workday (launched by founders of PeopleSoft), which has SaaS offerings for managing human resources. Several traditional software firms have countered start-ups by offering SaaS efforts of their own. IBM offers a SaaS version of its Cognos business intelligence products, Oracle offers CRM On Demand, and SAP’s Business ByDesign includes a full suite of enterprise SaaS offerings. Even Microsoft has gone SaaS, with a variety of Web-based services that include CRM, Web meeting tools, collaboration, e-mail, and calendaring. SaaS is also taking on desktop applications. Intuit has online versions of its QuickBooks, TurboTax, and Quicken finance software. Adobe has an online version of Photoshop. Google and Zoho offer office suites that compete with desktop alternatives, prompting Microsoft’s own introduction of an online version of Office. And if you store photos on Flickr or Picasa instead of your PC’s hard drive, then you’re using SaaS, too. Figure 10.4 A look at Zoho’s home page shows the diversity of both desktop and enterprise offerings from this SaaS upstart. Note that the firm makes its services available through browsers, phones, and even Facebook. • The Benefits of SaaS Firms can potentially save big using SaaS. Organizations that adopt SaaS forgo the large upfront costs of buying and installing software packages. 
For large enterprises, the cost to license, install, and configure products like ERP and CRM systems can easily run into the hundreds of thousands or even millions of dollars. And these costs are rarely a one-time fee. Additional costs like annual maintenance contracts have also been rising as rivals fail or get bought up. Less competition among traditional firms recently allowed Oracle and SAP to raise maintenance fees to as much as 20 percent (Lacy, 2008). Firms that adopt SaaS don’t just save on software and hardware, either. There’s also the added cost for the IT staff needed to run these systems. Forrester Research estimates that SaaS can bring cost savings of 25 to 60 percent if all these costs are factored in (Quittner, 2008). There are also accounting and corporate finance implications for SaaS. Firms that adopt software as a service never actually buy a system’s software and hardware, so these costs become a variable operating expense. This flexibility helps mitigate the financial risks associated with making a large capital investment in information systems. For example, if a firm pays Salesforce.com sixty-five dollars per month per user for its CRM software, it can reduce payments during a slow season with a smaller staff, or pay more during heavy months when it might employ temporary workers. At these rates, SaaS not only looks good to large firms, it makes very sophisticated technology available to smaller firms that otherwise wouldn’t be able to afford expensive systems, let alone the IT staff and hardware required to run them. In addition to cost benefits, SaaS offerings also provide the advantage of being highly scalable. This feature is important because many organizations operate in environments prone to wide variance in usage. Some firms might expect systems to be particularly busy during tax time or the period around quarterly financial reporting deadlines, while others might have their heaviest system loads around a holiday season. A music label might see spikes when an artist drops a new album. Using conventional software, an organization would have to buy enough computing capacity to ensure that it could handle its heaviest anticipated workload. But sometimes these loads are difficult to predict, and if the difference between high workloads and average use is great, a lot of that expensive computer hardware will spend most of its time doing nothing. In SaaS, however, the vendor is responsible for ensuring that systems meet demand fluctuations. Vendors frequently sign a service level agreement (SLA) with their customers to ensure a guaranteed uptime and define their ability to meet demand spikes. When looking at the benefits of SaaS, also consider the potential for higher quality and service levels. SaaS firms benefit from economies of scale that not only lower software and hardware costs, but also potentially boost quality. The volume of customers and diversity of their experiences means that an established SaaS vendor is most likely an expert in dealing with all sorts of critical computing issues. SaaS firms handle backups, instantly deploy upgrades and bug fixes, and deal with the continual burden of security maintenance—all costly tasks that must be performed regularly and with care, yet that offer little strategic value to firms performing these functions themselves in-house. 
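To make the operating-expense point above concrete, here is a hedged, back-of-the-envelope sketch (in Python) comparing a per-user monthly subscription, using the sixty-five-dollar-per-user Salesforce.com figure cited above, against an installed alternative. The license cost, maintenance rate, and seasonal headcount below are hypothetical assumptions chosen purely for illustration, not vendor quotes.

# Back-of-the-envelope comparison: SaaS subscription vs. installed software.
# The $65/user/month subscription rate comes from the example above; every
# other figure is a hypothetical assumption used only for illustration.
SAAS_PER_USER_PER_MONTH = 65          # from the Salesforce.com example
INSTALLED_LICENSE_UPFRONT = 250_000   # hypothetical one-time license and install cost
INSTALLED_ANNUAL_MAINTENANCE = 0.20   # hypothetical 20% annual maintenance fee

# Hypothetical seasonal headcount: fewer seats in slow months, more in busy ones.
monthly_users = [40, 40, 45, 50, 60, 75, 75, 70, 55, 45, 40, 40]

saas_year_one = sum(users * SAAS_PER_USER_PER_MONTH for users in monthly_users)
installed_year_one = (INSTALLED_LICENSE_UPFRONT
                      + INSTALLED_LICENSE_UPFRONT * INSTALLED_ANNUAL_MAINTENANCE)

print(f"SaaS (variable operating expense), year one: ${saas_year_one:,.0f}")
print(f"Installed (fixed capital expense), year one: ${installed_year_one:,.0f}")

The specific totals matter less than the shape of the cash flows: the subscription scales down in slow months, while the installed option commits the firm to a large, fixed outlay regardless of usage.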
The breadth of a SaaS vendor’s customer base typically pushes the firm to evaluate and address new technologies as they emerge, such as quickly offering accessibility from mobile platforms like the BlackBerry and iPhone. For all but the savviest of IT shops, an established SaaS vendor can likely leverage its scale and experience to provide better, cheaper, more reliable standard information systems than individual companies typically can. Software developers who choose to operate as SaaS providers also realize benefits. While a packaged software company like SAP must support multiple versions of its software to accommodate operating systems like Windows, Linux, and various flavors of Unix, a SaaS provider develops, tests, deploys, and supports just one version of the software executing on its own servers. An argument might also be made that SaaS vendors are more attuned to customer needs. Since SaaS firms run a customer’s systems on their own hardware, they have a tighter feedback loop in understanding how products are used (and why they fail)—potentially accelerating their ability to enhance their offerings. And once made, enhancements or fixes are immediately available to customers the next time they log in. SaaS applications also impact distribution costs and capacity. As much as 30 percent of the price of traditional desktop software is tied to the cost of distribution—pressing CD-ROMs, packaging them in boxes, and shipping them to retail outlets (Drummond, 2001). Going direct to consumers can cut out the middleman, so vendors can charge less or capture profits that they might otherwise share with a store or other distributor. Going direct also means that SaaS applications are available anywhere someone has an Internet connection, making them truly global applications. This feature has allowed many SaaS firms to address highly specialized markets (sometimes called vertical niches). For example, the Internet allows a company writing specialized legal software, or a custom package for the pharmaceutical industry, to have a national deployment footprint from day one. Vendors of desktop applications that go SaaS benefit from this kind of distribution, too. Finally, SaaS allows a vendor to counter the vexing and costly problem of software piracy. It’s just about impossible to make an executable, illegal copy of a subscription service that runs on a SaaS provider’s hardware. Gaming in Flux: Is There a Future in Free? PC game makers are in a particularly tough spot. Development costs are growing as games become more sophisticated. But profits are plummeting as firms face rampant piracy, a growing market for used game sales, and lower sales from rental options from firms like Blockbuster and GameFly. To combat these trends, Electronic Arts (EA) has begun to experiment with a radical alternative to PC game sales—give the base version of the product away for free and make money by selling additional features. The firm started with the Korean version of its popular FIFA soccer game. Koreans are crazy for the world’s most popular sport; their nation even cohosted the World Cup in 2002. But piracy was killing EA’s sales in Korea. To combat the problem, EA created a free, online version of FIFA that let fans pay for additional features and upgrades, such as new uniforms for their virtual teams, or performance-enhancing add-ons. Each enhancement only costs about one dollar and fifty cents, but the move to a model based on these so-called microtransactions has brought in big earnings. 
During the first two years that the microtransaction-based Korean FIFA game was available, EA raked in roughly \$1 million a month. The two-year, twenty-four-million-dollar take was twice the sales record for EA’s original FIFA game. Asian markets have been particularly receptive to microtransactions—this revenue model makes up a whopping 50 percent of the region’s gaming revenues. But whether this model can spread to other parts of the world remains to be seen. The firm’s first free, microtransaction offering outside of Korea leverages EA’s popular Battlefield franchise. Battlefield Heroes sports lower quality, more cartoon-like graphics than EA’s conventional Battlefield offerings, but it will be offered free online. Lest someone think they can rise to the top of player rankings by buying the best military hardware for their virtual armies, EA offers a sophisticated matching engine, pitting players with similar abilities and add-ons against one another (Schenker, 2008). Players of the first versions of Battlefield Heroes and FIFA Online needed to download software to their PC. But the start-up World Golf Tour shows how increasingly sophisticated games can execute within a browser, SaaS-style. WGT doesn’t have quite the graphics sophistication of the dominant desktop golf game (EA’s Tiger Woods PGA Golf), but the free, ad-supported offering is surprisingly detailed. Buddies can meet up online for a virtual foursome, played on high-resolution representations of the world’s elite courses stitched together from fly-over photographs taken as part of game development. World Golf Tour is ad-supported. The firm hopes that advertisers will covet access to the high-income office workers likely to favor a quick virtual golf game to break up their workday. Zynga’s FarmVille, an app game for Facebook, combines both models. Free online, but offering added features purchased in micropayment-sized chunks, FarmVille made half a million dollars in three days, just by selling five-dollar virtual sweet potatoes (MacMillan, et. al., 2009). FIFA Online, Battlefield Heroes, World Golf Tour, and FarmVille all show that the conventional models of gaming software are just as much in flux as those facing business and productivity packages. 
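As a rough illustration of how such small purchases add up, the short calculation below backs out the implied purchase volume from the Korean FIFA figures quoted above (roughly one dollar and fifty cents per enhancement and about a million dollars a month). It is an order-of-magnitude estimate for illustration only, not EA’s reported transaction data.

# Implied microtransaction volume from the figures cited above.
# This is a rough estimate, not reported data.
price_per_item = 1.50          # approximate price of one in-game enhancement
monthly_revenue = 1_000_000    # approximate monthly take cited for Korean FIFA

implied_purchases = monthly_revenue / price_per_item
print(f"Implied purchases per month: {implied_purchases:,.0f}")
# Roughly 667,000 small transactions a month: volume, not price, drives the model.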
Key Takeaways • SaaS firms may offer their clients several benefits including the following: • lower costs by eliminating or reducing software, hardware, maintenance, and staff expenses • financial risk mitigation since start-up costs are so low • potentially faster deployment times compared with installed packaged software or systems developed in-house • costs that are a variable operating expense rather than a large, fixed capital expense • scalable systems that make it easier for firms to ramp up during periods of unexpectedly high system use • higher quality and service levels through instantly available upgrades, vendor scale economies, and expertise gained across its entire client base • remote access and availability—most SaaS offerings are accessed through any Web browser, and often even by phone or other mobile device • Vendors of SaaS products benefit from the following: • limiting development to a single platform, instead of having to create versions for different operating systems • tighter feedback loop with clients, helping fuel innovation and responsiveness • ability to instantly deploy bug fixes and product enhancements to all users • lower distribution costs • accessibility to anyone with an Internet connection • greatly reduced risk of software piracy • Microtransactions and ad-supported gaming present alternatives to conventional purchased video games. Firms leveraging these models potentially benefit from a host of SaaS advantages, including direct-to-consumer distribution, instant upgrades, continued revenue streams rather than one-time purchase payments, and a method for combating piracy. Questions and Exercises 1. Firms that buy conventional enterprise software spend money buying software and hardware. What additional and ongoing expenses are required as part of the “care and feeding” of enterprise applications? 2. In what ways can firms using SaaS products dramatically lower costs associated with their information systems? 3. How do SaaS vendors earn their money? 4. Give examples of enterprise-focused SaaS vendors and their products. Visit the Web sites of the firms that offer these services. Which firms are listed as clients? Does there appear to be a particular type of firm that uses its services, or are client firms broadly represented? 5. Give examples of desktop-focused SaaS vendors and their products. If some of these are free, try them out and compare them to desktop alternatives you may have used. Be prepared to share your experiences with your class. 6. List the cost-related benefits to users that accrue from using SaaS. 7. List the benefits other than cost-related that accrue to users from using SaaS. 8. List the benefits realized by vendors that offer SaaS services instead of conventional software. 9. Microtransactions have been tried in many contexts, but have often failed. Can you think of contexts where microtransactions don’t work well? Are there contexts where you have paid (or would be willing to pay) for products and services via microtransactions? What do you suppose are the major barriers to the broader acceptance of microtransactions? Do struggles have more to do with technology, consumer attitudes, or both? 10. Search online to find free and microtransaction-based games. What do you think of these efforts? What kind of gamers do these efforts appeal to? See if you can investigate whether there are examples of particularly successful offerings, or efforts that have failed. 
What’s the reason behind the success or failure of the efforts that you’ve investigated?
textbooks/biz/Management/Book%3A_Information_Systems__A_Managers_Guide_to_Harnessing_Technology/10%3A_Software_in_Flux-_Partly_Cloudy_and_Sometimes_Free/10.07%3A_The_Software_Cloud-_Why_Buy_When_You_Can_Rent.txt
Learning Objective After studying this section you should be able to do the following: 1. Be able to list and appreciate the risks associated with SaaS. As with any technology, there is rarely a silver bullet that solves all problems. A successful manager is able to see through industry hype and weigh the benefits of a technology against its weaknesses and limitations. And there are still several major concerns surrounding SaaS. The largest concerns involve the tremendous dependence a firm develops on its SaaS vendor. Having all of your eggs in one basket can leave a firm particularly vulnerable. If a traditional software company goes out of business, in most cases its customers can still go on using its products. But if your SaaS vendor goes under, you’re hosed. They’ve got all your data, and even if firms could get their data out, most organizations don’t have the hardware, software, staff, or expertise to quickly absorb an abandoned function. Beware with whom you partner. Any hot technology is likely to attract a lot of start-ups, and most of these start-ups are unlikely to survive. In just a single year, the leading trade association found that the number of SaaS vendors dropped from seven hundred members to four hundred fifty (Drummond, 2001). One of the early efforts to collapse was Pandesic, a joint venture between SAP and Intel—two large firms that might have otherwise instilled confidence among prospective customers. In another example, Danish SaaS firm “IT Factory” was declared “Denmark’s Best IT Company 2008” by Computerworld, only to follow the award one week later with a bankruptcy declaration (Wauters, 2008). Indeed, despite the benefits, the costs of operating as a SaaS vendor can be daunting. NetSuite’s founder claimed it “takes ten years and \$100 million to do right” (Lacy, 2008)—maybe that’s why the firm still wasn’t profitable, even a year and a half after going public. Firms that buy and install packaged software usually have the option of sticking with the old stuff as long as it works, but organizations adopting SaaS may find they are forced into adopting new versions. This fact is important because any radical changes in a SaaS system’s user interface or system functionality might result in unforeseen training costs, or increase the chance that a user might make an error. Keep in mind that SaaS systems are also reliant on a network connection. If a firm’s link to the Internet goes down, its link to its SaaS vendor is also severed. Relying on an Internet connection also means that data is transferred to and from a SaaS firm at Internet speeds, rather than the potentially higher speeds of a firm’s internal network. Solutions to many of these issues are evolving as Internet speeds become faster and Internet service providers become more reliable. There are also several programs that allow for offline use of data that is typically stored in SaaS systems, including Google Gears and Adobe AIR. With these products a user can download a subset of data for offline use (say, on a plane flight or in some other location without connectivity), and then sync the data when the connection is restored; a minimal sketch of this cache-and-sync pattern appears at the end of this section. Ultimately, though, SaaS users have a much higher level of dependence on their Internet connections. And although a SaaS firm may have more security expertise than your organization, that doesn’t mean that security issues can be ignored. 
Any time a firm allows employees to access its systems and data assets from a remote location, it is potentially vulnerable to abuse and infiltration. Some firms may simply be unacceptably uncomfortable with critical data assets existing outside their own network. There may also be contractual or legal issues preventing data from being housed remotely, especially if a SaaS vendor’s systems are in another country operating under different laws and regulations. “We’re very bound by regulators in terms of client data and country-of-origin issues, so it’s very difficult to use the cloud,” says Rupert Brown, a chief architect at Merrill Lynch (Gruman, 2008). SaaS systems are often accused of being less flexible than their installed software counterparts—mostly due to the more robust configuration and programming options available in traditional software packages. It is true that many SaaS vendors have improved system customization options and integration with standard software packages. And at times a lack of complexity can be a blessing—fewer choices can mean less training, faster start-up time, and lower costs associated with system use. But firms with unique needs may find SaaS restrictive. SaaS offerings usually work well when the bulk of computing happens at the server end of a distributed system because the kind of user interface you can create in a browser isn’t as sophisticated as what you can do with a separate, custom-developed desktop program. A comparison of the first few iterations of the Web-based Google office suite, which offers word processing, presentation software, and a spreadsheet, reveals a much more limited feature set than Microsoft’s Office desktop software. The bonus, of course, is that an online office suite is accessible anywhere and makes sharing documents a snap. Again, an understanding of trade-offs is key. Here’s another challenge for a firm and its IT staff: SaaS means a greater consumerization of technology. Employees, at their own initiative, can go to Socialtext or Google Sites and set up a wiki, use WordPress to start blogging, or subscribe to a SaaS offering like Salesforce.com, all without corporate oversight and approval. This work can result in employees operating outside established firm guidelines and procedures, potentially introducing operational inconsistencies or even legal and security concerns. The consumerization of corporate technology isn’t all bad. Employee creativity can blossom with increased access to new technologies, costs might be lower than those of homegrown solutions, and staff could introduce the firm to new tools that might not otherwise be on the radar of the firm’s IS Department. But all this creates an environment that requires a deeper level of engagement between a firm’s technical staff and the groups it serves than any prior generation of technology workers has had to provide. Those working in an organization’s information systems group must be sure to conduct regular meetings with representative groups of employees across the firm to understand their pain points and assess their changing technology needs. Non-IT managers should regularly reach out to IT to ensure that their needs are on the tech staff’s agenda. Organizations with internal IT-staff R&D functions that scan new technologies and critically examine their relevance and potential impact on the firm can help guide an organization through the promise and peril of new technologies. 
Now more than ever, IT managers must be deeply knowledgeable about business areas, broadly aware of new technologies, and able to bridge the tech and business worlds. Similarly, any manager looking to advance his or her organization has to regularly consider the impact of new technologies. Key Takeaways The risks associated with SaaS include the following: • dependence on a single vendor. • concern about the long-term viability of partner firms. • users may be forced to migrate to new versions—possibly incurring unforeseen training costs and shifts in operating procedures. • reliance on a network connection—which may be slower, less stable, and less secure. • data assets stored off-site—with the potential for security and legal concerns. • limited configuration, customization, and system integration options compared to packaged software or alternatives developed in-house. • the user interface of Web-based software is often less sophisticated and lacks the richness of most desktop alternatives. • ease of adoption may lead to pockets of unauthorized IT being used throughout an organization. Questions and Exercises 1. Consider the following two firms: a consulting start-up, and a defense contractor. Leverage what you know about SaaS and advise whether each might consider SaaS efforts for CRM or other enterprise functions. Why or why not? 2. Think of firms you’ve worked for, or firms you would like to work for. Do SaaS offerings make sense for these firms? Make a case for or against using certain categories of SaaS. 3. What factors would you consider when evaluating a SaaS vendor? Which firms are more appealing to you and why? 4. Discuss problems that may arise because SaaS solutions rely on Internet connections. Discuss the advantages of through-the-browser access. 5. Evaluate trial versions of desktop SaaS offerings (offered by Adobe, Google, Microsoft, Zoho, or others). Do you agree that the interfaces of Web-based versions are not as robust as desktop rivals? Are they good enough for you? For most users?
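Returning to the offline-access point raised in this section, here is a minimal sketch (in Python) of the general cache-and-sync pattern that products like Google Gears and Adobe AIR enabled: work against a local copy while disconnected, then push queued changes when the connection returns. The class and method names are hypothetical illustrations and do not reflect the actual API of any of those products.

# Minimal sketch of an offline cache-and-sync pattern. The "remote" object
# stands in for a hypothetical SaaS back end; this is NOT the Google Gears
# or Adobe AIR API, only an illustration of the idea described above.

class FakeRemote:
    """Stand-in for a SaaS back end, included only to make the sketch runnable."""
    def __init__(self):
        self.data = {}

    def get(self, key):
        return self.data.get(key)

    def put(self, key, value):
        self.data[key] = value


class OfflineCache:
    def __init__(self, remote):
        self.remote = remote   # hypothetical SaaS client with get/put methods
        self.local = {}        # local copy of records, usable while offline
        self.pending = []      # edits queued while disconnected
        self.online = True

    def read(self, key):
        # Prefer fresh data when online; fall back to the local copy otherwise.
        if self.online:
            self.local[key] = self.remote.get(key)
        return self.local.get(key)

    def write(self, key, value):
        self.local[key] = value
        if self.online:
            self.remote.put(key, value)
        else:
            self.pending.append((key, value))   # queue the edit for later sync

    def reconnect(self):
        # Connection restored (say, after a plane flight): replay queued edits.
        self.online = True
        for key, value in self.pending:
            self.remote.put(key, value)
        self.pending.clear()


cache = OfflineCache(FakeRemote())
cache.write("report", "Q1 draft")     # written straight through to the vendor
cache.online = False                  # connectivity lost
cache.write("report", "Q1 final")     # edit is held locally in the queue
cache.reconnect()                     # queued edit is pushed back to the vendor

The design choice worth noticing is that the local copy, not the network, is the user’s working surface; the network simply catches up when it can.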
textbooks/biz/Management/Book%3A_Information_Systems__A_Managers_Guide_to_Harnessing_Technology/10%3A_Software_in_Flux-_Partly_Cloudy_and_Sometimes_Free/10.08%3A_SaaS-_Not_without_Risks.txt
Learning Objectives After studying this section you should be able to do the following: 1. Distinguish between SaaS and hardware clouds. 2. Provide examples of firms and uses of hardware clouds. 3. Understand the concepts of cloud computing, cloudbursting, and black swan events. 4. Understand the challenges and economics involved in shifting computing hardware to the cloud. While SaaS provides the software and hardware to replace an internal information system, sometimes a firm develops its own custom software but wants to pay someone else to run it. That’s where hardware clouds, utility computing, and related technologies come in. In this model, a firm replaces computing hardware that it might otherwise run on-site with a service provided by a third party online. While the term utility computing was fashionable a few years back (and old-timers claim it shares a lineage with terms like hosted computing or even time-sharing), most in the industry now refer to this as an aspect of cloud computing, often called hardware clouds. Computing hardware used in this scenario exists “in the cloud,” meaning somewhere on the Internet. The costs of systems operated in this manner look more like a utility bill—you only pay for the amount of processing, storage, and telecommunications used. Tech research firm Gartner has estimated that 80 percent of corporate tech spending goes toward data center maintenance (Rayport, 2008). Hardware-focused cloud computing provides a way for firms to chip away at these costs. Major players are spending billions building out huge data centers to take all kinds of computing out of the corporate data center and place it in the cloud. Efforts include Sun’s Network.com grid, IBM’s Cloud Labs, Amazon’s EC2 (Elastic Compute Cloud), Google’s App Engine, Microsoft’s Azure, and Salesforce.com’s Force.com. While cloud vendors typically host your software on their systems, many of these vendors also offer additional tools to help in creating and hosting apps in the cloud. Salesforce.com offers Force.com, which includes not only a hardware cloud but also several cloud-supporting tools, including a programming environment (IDE) to write applications specifically tailored for Web-based delivery. Google’s App Engine offers developers a database product called Big Table, while Amazon offers one called SimpleDB. Traditional software firms like Oracle are also making their products available to developers through various cloud initiatives. Still other cloud computing efforts focus on providing a virtual replacement for operational hardware like storage and backup solutions. These include cloud-based backup efforts like EMC’s Mozy, and corporate storage services like Amazon’s Simple Storage Service (S3). Even efforts like Apple’s MobileMe and Microsoft’s Live Mesh that sync user data across devices (phone, multiple desktops) are considered part of the cloud craze. The common theme in all of this is leveraging computing delivered over the Internet to satisfy the computing needs of both users and organizations. Clouds in Action: A Snapshot of Diverse Efforts Large, established organizations, small firms, and start-ups are all embracing the cloud. The examples below illustrate the wide range of these efforts. Journalists refer to the New York Times as “The Old Gray Lady,” but it turns out that the venerable paper is a cloud-pioneering whippersnapper. 
When the Times decided to make roughly one hundred fifty years of newspaper archives (over fifteen million articles) available over the Internet, it realized that the process of converting scans into searchable PDFs would require more computing power than the firm had available (Rayport, 2008). To solve the challenge, a Times IT staffer simply broke out a credit card and signed up for Amazon’s EC2 cloud computing and S3 cloud storage services. The Times then started uploading terabytes of information to Amazon, along with a chunk of code to execute the conversion. While anyone can sign up for services online without speaking to a rep, someone from Amazon eventually contacted the Times to check in after noticing the massive volume of data coming into its systems. Using one hundred of Amazon’s Linux servers, the Times job took just twenty-four hours to complete. Along the way, a coding error in the initial batch forced the paper to rerun the job. Even the blunder was cheap—just two hundred forty dollars in extra processing costs. Says a member of the Times IT group: “It would have taken a month at our facilities, since we only had a few spare PCs.…It was cheap experimentation, and the learning curve isn’t steep” (Gruman, 2008). NASDAQ also uses Amazon’s cloud as part of its Market Replay system. The exchange uses Amazon to make terabytes of data available on demand, and uploads an additional thirty to eighty gigabytes every day. Market Replay allows access through an Adobe AIR interface to pull together historical market conditions in the ten-minute period surrounding a trade’s execution. This allows NASDAQ to produce a snapshot of information for regulators or customers who question a trade. Says the exchange’s VP of Product Development, “The fact that we’re able to keep so much data online indefinitely means the brokers can quickly answer a question without having to pull data out of old tapes and CD backups” (Grossman, 2009). NASDAQ isn’t the only major financial organization leveraging someone else’s cloud. Others include Merrill Lynch, which uses IBM’s Blue Cloud servers to build and evaluate risk analysis programs; and Morgan Stanley, which relies on Force.com for recruiting applications. The Network.com offering from Sun Microsystems is essentially a grid computer in the clouds (see Chapter 5 “Moore’s Law: Fast, Cheap Computing and What It Means for the Manager”). Since grid computers break a task up to spread across multiple processors, the Sun service is best for problems that can be easily divided into smaller mini jobs that can be processed simultaneously by the army of processors in Sun’s grid. The firm’s cloud is particularly useful for performing large-scale image and data tasks. Infosolve, a data management firm, uses the Sun cloud to scrub massive data sets, at times harnessing thousands of processors to comb through client records and correct inconsistent entries. IBM Cloud Labs, which counts Elizabeth Arden and the U.S. Golf Association among its customers, offers several services, including so-called cloudbursting. In a cloudbursting scenario a firm’s data center running at maximum capacity can seamlessly shift part of the workload to IBM’s cloud, with any spikes in system use metered, utility-style. Cloudbursting is appealing because forecasting demand is difficult and can’t account for the ultrarare, high-impact events, sometimes called black swans. 
Planning to account for usage spikes explains why the servers at many conventional corporate IS shops run at only 10 to 20 percent capacity (Parkinson, 2007). While the Cloud Labs cloudbursting service is particularly appealing for firms that already have a heavy reliance on IBM hardware in-house, it is possible to build these systems using the hardware clouds of other vendors, too. Salesforce.com’s Force.com cloud is especially tuned to help firms create and deploy custom Web applications. The firm makes it possible to piece together projects using premade Web services that provide software building blocks for features like calendaring and scheduling. The integration with the firm’s SaaS CRM effort, and with third-party products like Google Maps, allows enterprise mash-ups that can combine services from different vendors into a single application that’s run on Force.com hardware. The platform even includes tools to help deploy Facebook applications. Intuitive Surgical used Force.com to create and host a custom application to gather clinical trial data for the firm’s surgical robots. An IS manager at Intuitive noted, “We could build it using just their tools, so in essence, there was no programming” (Gruman, 2008). Other users include Jobscience, which used Force.com to launch its online recruiting site; and Harrah’s Entertainment, which uses Force.com applications to manage room reservations, air travel programs, and player relations. These efforts compete with a host of other initiatives, including Google’s App Engine and Microsoft’s Azure Services Platform, hosting firms like Rackspace, and cloud-specific upstarts like GoGrid. • Challenges Remain Hardware clouds and SaaS share similar benefits and risks, and as our discussion of SaaS showed, cloud efforts aren’t for everyone. Some additional examples illustrate the challenges in shifting computing hardware to the cloud. For all the hype about cloud computing, it doesn’t work in all situations. From an architectural standpoint, most large organizations run a hodgepodge of systems that include both packaged applications and custom code written in-house. Installing a complex set of systems on someone else’s hardware can be a brutal challenge and in many cases is just about impossible. For that reason we can expect most cloud computing efforts to focus on new software development projects rather than options for old software. Even for efforts that can be custom-built and cloud-deployed, other roadblocks remain. For example, some firms face stringent regulatory compliance issues. To quote one tech industry executive, “How do you demonstrate what you are doing is in compliance when it is done outside?” (Gruman, 2008) Firms considering cloud computing need to do a thorough financial analysis, comparing the capital and other costs of owning and operating their own systems over time against the variable costs over the same period for moving portions to the cloud; a back-of-the-envelope sketch of this kind of pay-per-use math appears at the end of this section. For high-volume, low-maintenance systems, the numbers may show that it makes sense to buy rather than rent. Cloud costs can seem super cheap at first. Sun’s early cloud effort offered a flat fee of one dollar per CPU per hour. Amazon’s cloud storage rates were twenty-five cents per gigabyte per month. But users often also pay for the number of accesses and the number of data transfers (Preimesberger, 2008). A quarter a gigabyte a month may seem like a small amount, but system maintenance costs often include the need to clean up old files or put them on tape. 
If unlimited data is stored in the cloud, these costs can add up. Firms should enter the cloud cautiously, particularly where mission-critical systems are concerned. When one of the three centers supporting Amazon’s cloud briefly went dark in 2008, start-ups relying on the service, including Twitter and SmugMug, reported outages. Apple’s MobileMe, a cloud-based product for synchronizing data across computers and mobile devices, struggled for months after its introduction when the cloud repeatedly went down. Vendors with multiple data centers that are able to operate with fault-tolerant provisioning, keeping a firm’s efforts at more than one location to account for any operating interruptions, will appeal to firms with stricter uptime requirements. Key Takeaways • It’s estimated that 80 percent of corporate tech spending goes toward data center maintenance. Hardware-focused cloud computing initiatives from third-party firms help tackle this cost by allowing firms to run their own software on the hardware of the provider. • Amazon, EMC, Google, IBM, Microsoft, Oracle/Sun, Rackspace, and Salesforce.com are among firms offering platforms to run custom software projects. Some offer additional tools and services, including support for cloud-based software development, hosting, application integration, and backup. • Users of cloud computing run the gamut of industries, including publishing (the New York Times), finance (NASDAQ), and cosmetics and skin care (Elizabeth Arden). • Benefits and risks are similar to those discussed in SaaS efforts. Benefits include the use of the cloud for handling large batch jobs or limited-time tasks, offloading expensive computing tasks, and cloudbursting efforts that handle system overflow when an organization needs more capacity. • Most legacy systems can’t be easily migrated to the cloud, meaning most efforts will be new efforts or those launched by younger firms. • Cloud (utility) computing doesn’t work in situations where complex legacy systems have to be ported, or where there may be regulatory compliance issues. • Some firms may still find TCO and pricing economics favor buying over renting—scale sometimes suggests an organization is better off keeping efforts in-house. Questions and Exercises 1. What are hardware clouds? What kinds of services are described by this term? What are other names for this phenomenon? How does this differ from SaaS? 2. Which firms are the leading providers of hardware clouds? How are clients using these efforts? 3. List the circumstances where hardware clouds work best and where they work poorly. 4. Research cloud-based alternatives for backing up your hard drive. Which are among the best-reviewed products or services? Why? Do you or would you use such a service? Why or why not? 5. Can you think of “black swan” events that have caused computing services to become less reliable? Describe the events and their consequences for computing services. Suggest a method and vendor for helping firms overcome the sorts of events that you encountered.
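As promised above, here is a hedged, back-of-the-envelope sketch (in Python) of the pay-per-use math a manager might run when comparing renting capacity to owning it. The hourly and per-gigabyte rates echo the illustrative figures quoted in this section (roughly one dollar per CPU per hour and twenty-five cents per gigabyte per month); the per-request fee, transfer fee, and the workload itself are hypothetical assumptions.

# Back-of-the-envelope "rent vs. own" math for a utility-computing job.
# The CPU-hour and storage rates mirror figures cited in this section;
# the transfer and request fees, and the workload, are hypothetical.
cpu_hour_rate = 1.00      # dollars per CPU per hour (illustrative)
storage_rate = 0.25       # dollars per gigabyte stored per month (illustrative)
transfer_rate = 0.10      # hypothetical dollars per gigabyte transferred
request_rate = 0.00001    # hypothetical dollars per storage request

servers, hours = 100, 24               # a short, bursty batch job
stored_gb, transferred_gb = 4_000, 500
requests = 2_000_000

compute_cost = servers * hours * cpu_hour_rate
storage_cost = stored_gb * storage_rate            # one month of storage
transfer_cost = transferred_gb * transfer_rate + requests * request_rate
total = compute_cost + storage_cost + transfer_cost

print(f"Compute: ${compute_cost:,.2f}")
print(f"Storage: ${storage_cost:,.2f}")
print(f"Transfers and requests: ${transfer_cost:,.2f}")
print(f"Total for the job and month: ${total:,.2f}")

Notice how the per-gigabyte and per-request line items, tiny in isolation, become a meaningful share of the bill as data accumulates, which is exactly the caution raised above.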
textbooks/biz/Management/Book%3A_Information_Systems__A_Managers_Guide_to_Harnessing_Technology/10%3A_Software_in_Flux-_Partly_Cloudy_and_Sometimes_Free/10.09%3A_The_Hardware_Cloud-_Utility_Computing_and_Its_Cousins.txt
Learning Objectives After studying this section you should be able to do the following: 1. Understand how cloud computing’s impact across industries is proving to be broad and significant. 2. Know how cloud computing is affecting high-end server sales and accelerating the shift from hardware sales to services. 3. Know how cloud computing is affecting innovation and changing the desired skills mix and job outlook for IS workers. 4. Know that by lowering the cost to access powerful systems and software, cloud computing can decrease barriers to entry. 5. Understand the importance, size, and metrics of server farms. Although still a relatively recent phenomenon, cloud computing’s impact across industries is already proving to be broad and significant. Cloud computing is affecting the competitive dynamics of the hardware, software, and consulting industries. In the past, firms seeking to increase computing capacity invested heavily in expensive, high-margin server hardware, creating a huge market for computer manufacturers. But now hardware firms find these markets may be threatened by the cloud. The shift from hardware to services is evident in IBM’s quarterly numbers. The firm recently reported its overall earnings were up 12 percent, even though hardware sales were off by 20 percent (Fortt, 2009). What made up the difference? The growth of Big Blue’s services business. IBM is particularly well positioned to take advantage of the shift to services because it employs more technology consultants than any other firm in the world, while most of its competitors are forced to partner to offer something comparable. Consulting firm Capgemini’s partnership to offer cloud services through Amazon is one such example. The shift to cloud computing also alters the margin structure for many in the computing industry. While Moore’s Law has made servers cheap, deploying SaaS and operating a commercial cloud is still very expensive—much more so than simply making additional copies of conventional, packaged software. Microsoft surprised Wall Street when it announced it would need to pour at least \$2 billion more than analysts expected into the year’s server farm capital spending. The firm’s stock—among the world’s most widely held—sank 11 percent in a day (Mehta, 2006). As a result, many portfolio managers started paying closer attention to the business implications of the cloud. Cloud computing can accelerate innovation and therefore change the desired skills mix and job outlook for IS workers. If cloud computing customers spend less on expensive infrastructure investments, they potentially have more money to reinvest in strategic efforts and innovation. IT careers may change, too. Demand for nonstrategic skills like hardware operations and maintenance is likely to decrease. Organizations will need more business-focused technologists who intimately understand a firm’s competitive environment, and can create systems that add value and differentiate the firm from its competition (Fortt, 2009). While these tech jobs require more business training, they’re also likely to be more durable and less likely to be outsourced to a third party with a limited understanding of the firm. By lowering the cost of accessing powerful systems and software, cloud computing also lowers barriers to entry. Firms need to think about the strategic advantages they can create, even as technology is easily duplicated. 
This trend means the potential for more new entrants across industries, and since start-ups can do more with less, it’s also influencing entrepreneurship and venture capital. The CTO of SlideShare, a start-up that launched using Amazon’s S3 storage cloud, offers a presentation on his firm’s site labeled “Using S3 to Avoid VC.” Similarly, the CEO of online payments start-up Zuora claims to have saved between half a million and \$1 million by using cloud computing: “We have no servers, we run the entire business in the cloud” (Ackerman, 2008). And the sophistication of these tools lowers development time. Enterprise firm Apttus claims it was able to perform the equivalent of six months of development in a couple of weekends by using cloud services. The firm scored its first million-dollar deal in three months, and was break-even in nine months, a ramp-up time that would have been unheard of had it needed to plan, purchase, and deploy its own data center, and create from scratch the Web services that were provided by its cloud vendor (Rayport, 2008). So What’s It Take to Run This Thing? In the countryside surrounding the Columbia River in the Pacific Northwest, potato farms are yielding to server farms. Turns out the area is tailor-made for creating the kinds of massive data installations that form the building blocks of cloud computing. The land is cheap, the region’s hydroelectric power costs a fraction of Silicon Valley rates, and the area is served by ultrafast fiber-optic connections. Even the area’s mild temperatures cut cooling costs. Most major players in cloud computing have server farms in the region, each with thousands of processors humming away simultaneously. Microsoft’s Quincy, Washington, facility is as big as ten American football fields and has nearly six hundred miles of wiring, 1.5 metric tons of battery backup, and three miles of chiller piping to keep things cool. Storage is big enough to store 6.75 trillion photos. Just a short drive away, Yahoo has two facilities on fifty acres, including one that runs at a zero carbon footprint. Google has a thirty-acre site sprawled across former farmland in The Dalles, Oregon. The Google site includes two massive buildings, with a third on the way. And in Boardman, Oregon, Amazon has a three-building petabyte palace that sports its own ten-megawatt electrical substation (Katz, 2009). While U.S. activity has been particularly intense in the Pacific Northwest, server farms that support cloud computing are popping up from Shanghai to São Paulo. Not only does a diverse infrastructure offer a degree of fault tolerance and disaster recovery (Oregon down? Shift to North Carolina), but the myriad of national laws and industry-specific regulatory environments may also require some firms to keep data within a specific country or region. To meet the challenge, cloud vendors are racing to deploy infrastructure worldwide and allowing customers to select regional availability zones for their cloud computing needs. The build-out race has become so intense that many firms have developed rapid-deployment server farm modules that are preconfigured and packed inside shipping containers. Some of these units contain as many as three thousand servers each. Just drop the containers on-site, link to power, water, and telecom, and presto—you’ve got yourself a data center. More than two hundred containers can be used on a single site. 
One Microsoft VP claimed the configuration has cut the time to open a data center to just a few days, noting that Microsoft’s San Antonio facility was operational in less time than it took a local western-wear firm to deliver her custom-made cowboy boots (Burrows, 2008)! Microsoft’s Dublin-based fourth-generation data center will be built entirely of containers—no walls or roof—using the outside air for much of the cooling (Vanderbilt, 2009). While firms are buying less hardware, cloud vendors have turned out to be the computing industry’s best customers. Amazon has spent well over \$2 billion on its cloud infrastructure. Google reportedly has 1.4 million servers operating across three dozen data centers (Katz, 2009). Demonstrating it won’t be outdone, Microsoft plans to build as many as twenty server farms, at costs of up to \$1 billion each (Burrows, 2008). Look for the clouds to pop up in unexpected places. Microsoft has scouted locations in Siberia, while Google has applied to patent a method for floating data centers on an offshore platform powered by wave motions (Katz, 2009). Key Takeaways • Cloud computing’s impact across industries is proving to be broad and significant. • Clouds can lower barriers to entry in an industry, making it easier for start-ups to launch and smaller firms to leverage the backing of powerful technology. • Clouds may also lower the amount of capital a firm needs to launch a business, shifting power away from venture firms in those industries that had previously needed more VC money. • Clouds can shift resources out of capital spending and into profitability and innovation. • Hardware and software sales may drop as cloud use increases, while service revenues will increase. • Cloud computing can accelerate innovation and therefore change the desired skills mix and job outlook for IS workers. Demand for tech skills in data center operations, support, and maintenance may shrink as a smaller number of vendors consolidate these functions. • Demand continues to spike for business-savvy technologists. Tech managers will need even stronger business skills and will focus an increasing percentage of their time on strategic efforts. These latter jobs are tougher to outsource, since they involve an intimate knowledge of the firm, its industry, and its operations. • The market for expensive, high-margin server hardware is threatened by companies moving applications to the cloud instead of investing in hardware. • Server farms require plenty of cheap land, low-cost power, and ultrafast fiber-optic connections, and benefit from mild climates. • Sun, Microsoft, IBM, and HP have all developed rapid-deployment server farm modules that are preconfigured and packed inside shipping containers. Questions and Exercises 1. Describe the change in IBM’s revenue stream resulting from the shift to the cloud. 2. Why is IBM particularly well positioned to take advantage of the shift to services? 3. Describe the shift in skill sets required for IT workers that is likely to result from the widespread adoption of cloud computing. 4. Why do certain entry barriers decrease as a result of cloud computing? What is the effect of lower entry barriers on new entrants, entrepreneurship, and venture capital? On existing competitors? 5. What factors make the Columbia River region of the Pacific Northwest an ideal location for server farms? 6. What is the estimated number of computers operated by Google? 7. 
Why did Microsoft’s shift to cloud computing create an unexpected shock among stock analysts who cover the firm? What does this tell you about the importance of technology understanding among finance and investment professionals? 8. Why do cloud computing vendors build regional server farms instead of one mega site? 9. Why would a firm build a container-based data center?
textbooks/biz/Management/Book%3A_Information_Systems__A_Managers_Guide_to_Harnessing_Technology/10%3A_Software_in_Flux-_Partly_Cloudy_and_Sometimes_Free/10.10%3A_Clouds_and_Tech_Industry_Impact.txt
Learning Objectives After studying this section you should be able to do the following: 1. Know what virtualization software is and its impact on cloud computing. 2. Be able to list the benefits to a firm from using virtualization. The reduced costs and increased power of commodity hardware are not the only contributors to the explosion of cloud computing. The availability of increasingly sophisticated software tools has also had an impact. Perhaps the most important software tool in the cloud computing toolbox is virtualization. Think of virtualization as a kind of operating system for operating systems. A server running virtualization software can create smaller compartments in memory that each behave as a separate computer with its own operating system and resources. The most sophisticated of these tools also allow firms to combine servers into a huge pool of computing resources that can be allocated as needed (Lyons, 2008). Virtualization can generate huge savings. Some studies have shown that on average, conventional data centers run at 15 percent or less of their maximum capacity. Data centers using virtualization software have increased utilization to 80 percent or more (Katz, 2009). This increased efficiency means cost savings in hardware, staff, and real estate (a short consolidation calculation at the end of this section shows the arithmetic). Plus it reduces a firm’s IT-based energy consumption, cutting costs, lowering its carbon footprint, and boosting “green cred” (Castro, 2007). Using virtualization, firms can buy and maintain fewer servers, each running at a greater capacity. They can also power down servers until increased demand requires them to come online. While virtualization is a key software building block that makes public cloud computing happen, it can also be used in-house to reduce an organization’s hardware needs, and even to create a firm’s own private cloud of scalable assets. Bechtel, BT, Merrill Lynch, and Morgan Stanley are among the firms with large private clouds enabled by virtualization (Brodkin, 2008). Another kind of virtualization, the virtual desktop, allows a server to run what amounts to a copy of a PC—OS, applications, and all—and simply deliver an image of what’s executing to a PC or other connected device. This allows firms to scale, back up, secure, and upgrade systems far more easily than if they had to maintain each individual PC. One game start-up hopes to remove the high-powered game console hardware attached to your television and instead put the console in the cloud, delivering games to your TV as they execute remotely on superfast server hardware. Virtualization can even live on your desktop. Anyone who’s ever run Windows in a window on Mac OS X is using virtualization software; these tools inhabit a chunk of your Mac’s memory for running Windows and actually fool this foreign OS into thinking that it’s running on a PC. Interest in virtualization has exploded in recent years. VMware, the virtualization software division of storage firm EMC, was the biggest IPO of 2007. But its niche is getting crowded. Microsoft has entered the market, building virtualization into its server offerings. Dell bought a virtualization software firm for \$1.54 billion. And there’s even an open source virtualization product called Xen (Castro, 2007). Key Takeaways • Virtualization software allows one computing device to function as many. The most sophisticated products also make it easy for organizations to scale computing requirements across several servers. 
• Virtualization software can lower a firm’s hardware needs, save energy, and boost scalability. • Data center virtualization software is at the heart of many so-called private clouds and scalable corporate data centers, as well as the sorts of public efforts described earlier. • Virtualization also works on the desktop, allowing multiple operating systems (Mac OS X, Linux, Windows) to run simultaneously on the same platform. • Virtualization software can increase data center utilization to 80 percent or more. • While virtualization is used to make public cloud computing happen, it can also be used in-house to create a firm’s own private cloud. • A number of companies, including Microsoft and Dell, have entered the growing virtualization market. Questions and Exercises 1. List the benefits to a firm from using virtualization. 2. What is the average utilization rate for conventional data centers? 3. List companies that have virtualization-enabled private clouds. 4. Give an example of desktop virtualization. 5. Name three companies that are players in the virtualization software industry.
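To see why the utilization figures cited above translate into big hardware savings, here is a quick consolidation calculation in Python. The 15 percent and 80 percent utilization rates come from this section; the number of workloads is a hypothetical assumption.

# Illustrative server-consolidation math using the utilization figures cited
# above (roughly 15% before virtualization, 80% or more after). The workload
# count is a hypothetical assumption.
import math

workloads = 200            # hypothetical number of applications, one per server today
utilization_before = 0.15  # typical conventional data center (from the text)
utilization_after = 0.80   # achievable with virtualization (from the text)

# If each workload keeps a dedicated server only 15% busy, the real demand
# equals 200 * 0.15 = 30 fully loaded servers' worth of work.
demand_in_full_servers = workloads * utilization_before

servers_before = workloads                                       # one box per application
servers_after = math.ceil(demand_in_full_servers / utilization_after)

print(f"Servers before virtualization: {servers_before}")
print(f"Servers after virtualization:  {servers_after}")
# Roughly 200 servers shrink to about 38 in this example: fewer boxes to buy,
# power, cool, and house, which is where the cost and "green" savings come from.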
textbooks/biz/Management/Book%3A_Information_Systems__A_Managers_Guide_to_Harnessing_Technology/10%3A_Software_in_Flux-_Partly_Cloudy_and_Sometimes_Free/10.11%3A_Virtualization-_Software_That_Makes_One_Computer_Act_Like_Many.txt
Learning Objectives After studying this section you should be able to do the following: 1. Know the options managers have when determining how to satisfy the software needs of their companies. 2. Know the factors that must be considered when making the make, buy, or rent decision. So now you realize managers have a whole host of options when seeking to fulfill the software needs of their firms. An organization can purchase packaged software from a vendor, use open source offerings, leverage SaaS or another type of cloud computing, outsource development or other IT functions to another firm either domestically or abroad, or develop all or part of the effort itself. When presented with all of these options, making decisions about technologies and systems can seem pretty daunting. First, realize that for most firms, technology decisions are not binary options for the whole organization in all situations. Few businesses will opt for an IT configuration that is 100 percent in-house, packaged, or SaaS. Being aware of the parameters to consider can help a firm make better, more informed decisions. It’s also important to keep in mind that these decisions need to be continuously reevaluated as markets and business needs change. What follows is a summary of some of the key variables to consider. Competitive Advantage: Do we rely on unique processes, procedures, or technologies that create vital, differentiating competitive advantage? If so, then these functions aren’t good candidates to outsource or replace with a packaged software offering. Amazon.com had originally used recommendation software provided by a third party, and Netflix and Dell both considered third-party software to manage inventory fulfillment. But in all three cases, these firms felt that mastery of these functions was too critical to competitive advantage, so each firm developed proprietary systems unique to its own circumstances. Security: Are there unacceptable risks associated with using the packaged software, OSS, cloud solution, or an outsourcing vendor? Are we convinced that the prospective solution is sufficiently secure and reliable? Can we trust the prospective vendor with our code, our data, our procedures, and our way of doing business? Are there noncompete provisions for vendor staff that may be privy to our secrets? For off-site work, are there sufficient policies in place for on-site auditing? If the answer to any of these questions is no, outsourcing might not be a viable option. Legal and Compliance: Is our firm prohibited outright from using certain technologies? Are there specific legal and compliance requirements related to deploying our products or services? Even a technology as innocuous as instant messaging may need to be deployed in such a way that it complies with laws requiring firms to record and reproduce the electronic equivalent of a paper trail. For example, SEC Rule 17a-4 requires broker-dealers to retain client communications for a minimum of three years. HIPAA laws governing health care providers state that electronic communications must also be captured and stored (Shapiro, 2004). While tech has gained a seat in the boardroom, legal also deserves a seat in systems planning meetings. Skill, Expertise, and Available Labor: Can we build it? The firm may have skilled technologists, but they may not be sufficiently experienced with a new technology. Even if they are skilled, managers must consider the costs of allocating staff away from existing projects for this effort. 
Cost: Is this a cost-effective choice for our firm? A host of factors must be considered when evaluating the cost of an IT decision. The costs to build, host, maintain, and support an ongoing effort involve labor (software development, quality assurance, ongoing support, training, and maintenance), consulting, security, operations, licensing, energy, and real estate. Any analysis of costs should consider not only the aggregate spending required over the lifetime of the effort but also whether these factors might vary over time. Time: Do we have time to build, test, and deploy the system? Vendor Issues: Is the vendor reputable and in a sound financial position? Can the vendor guarantee the service levels and reliability we need? What provisions are in place in case the vendor fails or is acquired? Is the vendor certified via the Carnegie Mellon Software Engineering Institute or other standards organizations in a way that conveys quality, trust, and reliability? The list above is a starter (a brief scoring sketch at the end of this section illustrates one way such factors might be weighed against one another). It should also be clear that these metrics are sometimes quite tough to estimate. Welcome to the challenges of being a manager! At times an environment in flux can make an executive feel like he or she is working on a surfboard, constantly being buffeted about by unexpected currents and waves. Hopefully the issues outlined in this chapter will give you the surfing skills you need for a safe ride that avoids the organizational equivalent of a wipeout. Key Takeaways • The make, buy, or rent decision may apply on a case-by-case basis that might be evaluated by firm, division, project, or project component. Firm and industry dynamics may change in a way that causes firms to reassess earlier decisions, or to alter the direction of new initiatives. • Factors that managers should consider when making a make, buy, or rent decision include the following: competitive advantage, security, legal and compliance issues, the organization’s skill and available labor, cost, time, and vendor issues. • Factors must be evaluated over the lifetime of a project, not at a single point in time. • Managers have numerous options available when determining how to satisfy the software needs of their companies: purchase packaged software from a vendor, use OSS, use SaaS or utility computing, outsource development, or develop all or part of the effort in-house. • If a company relies on unique processes, procedures, or technologies that create vital, differentiating competitive advantages, those functions probably aren’t good candidates to outsource. Questions and Exercises 1. What are the options available to managers when seeking to meet the software needs of their companies? 2. What are the factors that must be considered when making the make, buy, or rent decision? 3. What are some security-related questions that must be asked when making the make, buy, or rent decision? 4. What are some vendor-related questions that must be asked when making the make, buy, or rent decision? 5. What are some of the factors that must be considered when evaluating the cost of an IT decision? 6. Why must factors be evaluated over the lifetime of a project, not at a single point in time?
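The factor list above resists a single formula, but some teams find it helpful to make the trade-offs explicit with a simple weighted scoring exercise. The sketch below is illustrative only: the factor names, weights, and option scores are hypothetical placeholders rather than recommendations, and a real evaluation would revisit them as conditions change.

```python
# Illustrative weighted-scoring sketch for a make, buy, or rent decision.
# All weights and scores below are hypothetical placeholders.

FACTORS = {  # factor -> weight (relative importance; weights sum to 1.0)
    "competitive_advantage": 0.30,
    "security": 0.20,
    "legal_and_compliance": 0.15,
    "skills_and_labor": 0.10,
    "cost": 0.15,
    "time": 0.05,
    "vendor_strength": 0.05,
}

# Each option is scored from 1 (poor fit) to 5 (strong fit) on every factor.
OPTIONS = {
    "build_in_house": {"competitive_advantage": 5, "security": 4, "legal_and_compliance": 4,
                       "skills_and_labor": 2, "cost": 2, "time": 2, "vendor_strength": 5},
    "packaged_software": {"competitive_advantage": 2, "security": 3, "legal_and_compliance": 4,
                          "skills_and_labor": 4, "cost": 4, "time": 4, "vendor_strength": 3},
    "saas": {"competitive_advantage": 2, "security": 3, "legal_and_compliance": 3,
             "skills_and_labor": 5, "cost": 4, "time": 5, "vendor_strength": 3},
}

def weighted_score(scores):
    """Return the weight-adjusted score for a single option."""
    return sum(FACTORS[factor] * scores[factor] for factor in FACTORS)

# Rank the options from best to worst fit under the assumed weights.
for name, scores in sorted(OPTIONS.items(), key=lambda kv: weighted_score(kv[1]), reverse=True):
    print(f"{name:>18}: {weighted_score(scores):.2f}")
```

A score like this is a conversation starter rather than an answer; as the section notes, the underlying factors shift over the lifetime of a project, so the exercise should be repeated as markets and business needs change.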
textbooks/biz/Management/Book%3A_Information_Systems__A_Managers_Guide_to_Harnessing_Technology/10%3A_Software_in_Flux-_Partly_Cloudy_and_Sometimes_Free/10.12%3A_Make_Buy_or_Rent.txt
Learning Objectives After studying this section you should be able to do the following: 1. Understand how increasingly standardized data, access to third-party data sets, cheap, fast computing and easier-to-use software are collectively enabling a new age of decision making. 2. Be familiar with some of the enterprises that have benefited from data-driven, fact-based decision making. The planet is awash in data. Cash registers ring up transactions worldwide. Web browsers leave a trail of cookie crumbs nearly everywhere they go. And with radio frequency identification (RFID), inventory can literally announce its presence so that firms can precisely journal every hop their products make along the value chain: “I’m arriving in the warehouse,” “I’m on the store shelf,” “I’m leaving out the front door.” A study by Gartner Research claims that the amount of data on corporate hard drives doubles every six months (Babcock, 2006), while IDC states that the collective number of those bits already exceeds the number of stars in the universe (Mearian, 2008). Wal-Mart alone boasts a data volume well over 125 times as large as the entire print collection of the U.S. Library of Congress1. And with this flood of data comes a tidal wave of opportunity. Increasingly standardized corporate data, and access to rich, third-party data sets—all leveraged by cheap, fast computing and easier-to-use software—are collectively enabling a new age of data-driven, fact-based decision making. You’re less likely to hear old-school terms like “decision support systems” used to describe what’s going on here. The phrase of the day is business intelligence (BI), a catchall term combining aspects of reporting, data exploration and ad hoc queries, and sophisticated data modeling and analysis. Alongside business intelligence in the new managerial lexicon is the phrase analytics, a term describing the extensive use of data, statistical and quantitative analysis, explanatory and predictive models, and fact-based management to drive decisions and actions (Davenport & Harris, 2007). The benefits of all this data and number crunching are very real, indeed. Data leverage lies at the center of competitive advantage we’ve studied in the Zara, Netflix, and Google cases. Data mastery has helped vault Wal-Mart to the top of the Fortune 500 list. It helped Harrah’s Casino Hotels grow to be twice as profitable as similarly sized Caesars, and rich enough to acquire this rival. And data helped Capital One find valuable customers that competitors were ignoring, delivering ten-year financial performance a full ten times greater than the S&P 500. Data-driven decision making is even credited with helping the Red Sox win their first World Series in eighty-six years and with helping the New England Patriots win three Super Bowls in four years. To quote from a BusinessWeek cover story on analytics, “Math Will Rock Your World!” (Baker, 2006) Sounds great, but it can be a tough slog getting an organization to the point where it has a leveragable data asset. In many organizations data lies dormant, spread across inconsistent formats and incompatible systems, unable to be turned into anything of value. Many firms have been shocked at the amount of work and complexity required to pull together an infrastructure that empowers their managers. But not only can this be done; it must be done. Firms that are basing decisions on hunches aren’t managing; they’re gambling. And the days of uninformed managerial dice rolling are over. 
While we’ll study technology in this chapter, our focus isn’t as much on the technology itself as it is on what you can do with that technology. Consumer products giant P&G believes in this distinction so thoroughly that the firm renamed its IT function as “Information and Decision Solutions” (Soat, 2007). Solutions drive technology decisions, not the other way around. In this chapter we’ll study the data asset, how it’s created, how it’s stored, and how it’s accessed and leveraged. We’ll also study many of the firms mentioned above, and more, providing context for understanding how managers are leveraging data to create winning models, and how those that have failed to realize the power of data have been left in the dust. Data, Analytics, and Competitive Advantage Anyone can acquire technology—but data is oftentimes considered a defensible source of competitive advantage. The data a firm can leverage is a true strategic asset when it’s rare, valuable, imperfectly imitable, and lacking in substitutes (see Chapter 2 “Strategy and Technology: Concepts and Frameworks for Understanding What Separates Winners from Losers”). If more data brings more accurate modeling, moving early to capture this rare asset can be the difference between a dominating firm and an also-ran. But be forewarned, there’s no monopoly on math. Advantages based on capabilities and data that others can acquire will be short-lived. Those advances leveraged by the Red Sox were originally pioneered by the Oakland A’s and are now used by nearly every team in the major leagues. This doesn’t mean that firms can ignore the importance data can play in lowering costs, increasing customer service, and other ways that boost performance. But differentiation will be key in distinguishing operationally effective data use from those efforts that can yield true strategic positioning. Key Takeaways • The amount of data on corporate hard drives doubles every six months. • In many organizations, available data is not exploited to advantage. • Data is oftentimes considered a defensible source of competitive advantage; however, advantages based on capabilities and data that others can acquire will be short-lived. Questions and Exercises 1. Name and define the terms that are supplanting discussions of decision support systems in the modern IS lexicon. 2. Is data a source of competitive advantage? Describe situations in which data might be a source for sustainable competitive advantage. When might data not yield sustainable advantage? 3. Are advantages based on analytics and modeling potentially sustainable? Why or why not? 4. What role do technology and timing play in realizing advantages from the data asset? 1Derived by comparing Wal-Mart’s 2.5 petabytes (E. Lai, “Teradata Creates Elite Club for Petabyte-Plus Data Warehouse Customers,” Computerworld, October 18, 2008) to the Library of Congress estimate of 20 TB (D. Gewirtz, “What If Someone Stole the Library of Congress?” CNN.com/AC360, May 25, 2009). It’s further noted that the Wal-Mart figure is just for data stored on systems provided by the vendor Teradata. Wal-Mart has many systems outside its Teradata-sourced warehouses, too.
textbooks/biz/Management/Book%3A_Information_Systems__A_Managers_Guide_to_Harnessing_Technology/11%3A_The_Data_Asset-_Databases_Business_Intelligence_and_Competitive_Advantage/11.01%3A_Introduction.txt
Learning Objectives After studying this section you should be able to do the following: 1. Understand the difference between data and information. 2. Know the key terms and technologies associated with data organization and management. Data refers simply to raw facts and figures. Alone it tells you nothing. The real goal is to turn data into information. Data becomes information when it’s presented in a context so that it can answer a question or support decision making. And it’s when this information can be combined with a manager’s knowledge—their insight from experience and expertise—that stronger decisions can be made. Trusting Your Data The ability to look critically at data and assess its validity is a vital managerial skill. When decision makers are presented with wrong data, the results can be disastrous. And these problems can get amplified if bad data is fed to automated systems. As an example, look at the series of man-made and computer-triggered events that brought about a billion-dollar collapse in United Airlines stock. In the wee hours one Sunday morning in September 2008, a single reader browsing back stories on the Orlando Sentinel’s Web site viewed a 2002 article on the bankruptcy of United Airlines (UAL went bankrupt in 2002, but emerged from bankruptcy four years later). That lone Web surfer’s access of this story during such a low-traffic time was enough for the Sentinel’s Web server to briefly list the article as one of the paper’s “most popular.” Google crawled the site and picked up this “popular” news item, feeding it into Google News. Early that morning, a worker in a Florida investment firm came across the Google-fed story, assumed United had yet again filed for bankruptcy, then posted a summary on Bloomberg. Investors scanning Bloomberg jumped on what looked like a reputable early warning of another United bankruptcy, dumping UAL stock. Blame the computers again—the rapid plunge from these early trades caused automatic sell systems to kick in (event-triggered, computer-automated trading is responsible for about 30 percent of all stock trades). Once the machines took over, UAL dropped like a rock, falling from twelve to three dollars. That drop represented the vanishing of $1 billion in wealth, and all this because no one checked the date on a news story. Welcome to the new world of paying attention (Harvey, 2008)! • Understanding How Data Is Organized: Key Terms and Technologies A database is simply a list (or more likely, several related lists) of data. Most organizations have several databases—perhaps even hundreds or thousands. And these various databases might be focused on any combination of functional areas (sales, product returns, inventory, payroll), geographical regions, or business units. Firms often create specialized databases for recording transactions, as well as databases that aggregate data from multiple sources in order to support reporting and analysis. Databases are created, maintained, and manipulated using programs called database management systems (DBMS), sometimes referred to as database software. DBMS products vary widely in scale and capabilities. They include the single-user, desktop versions of Microsoft Access or FileMaker Pro, Web-based offerings like Intuit QuickBase, and industrial-strength products from Oracle, IBM (DB2), Sybase, Microsoft (SQL Server), and others. Oracle is the world’s largest database software vendor, and database software has meant big bucks for Oracle cofounder and CEO Larry Ellison. 
Ellison perennially ranks in the Top 10 of the Forbes 400 list of wealthiest Americans. The acronym SQL (often pronounced sequel) also shows up a lot when talking about databases. Structured query language (SQL) is by far the most common language for creating and manipulating databases. You’ll find variants of SQL inhabiting everything from lowly desktop software, to high-powered enterprise products. Microsoft’s high-end database is even called SQL Server. And of course there’s also the open source MySQL (whose stewardship now sits with Oracle as part of the firm’s purchase of Sun Microsystems). Given this popularity, if you’re going to learn one language for database use, SQL’s a pretty good choice. And for a little inspiration, visit Monster.com or another job site and search for jobs mentioning SQL. You’ll find page after page of listings, suggesting that while database systems have been good for Ellison, learning more about them might be pretty good for you, too. Even if you don’t become a database programmer or database administrator (DBA), you’re almost surely going to be called upon to dive in and use a database. You may even be asked to help identify your firm’s data requirements. It’s quite common for nontech employees to work on development teams with technical staff, defining business problems, outlining processes, setting requirements, and determining the kinds of data the firm will need to leverage. Database systems are powerful stuff, and can’t be avoided, so a bit of understanding will serve you well. Figure 11.1 A Simplified Relational Database for a University Course Registration System
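To make the vocabulary concrete, here is a small, self-contained sketch of a relational database in the spirit of the course registration example the figure refers to. The table and column names are invented for illustration (they are not taken from the figure itself), and the snippet uses Python’s built-in sqlite3 module simply because it requires no separate database server.

```python
# A minimal sketch of a relational database in the spirit of Figure 11.1.
# Table and column names are hypothetical; the figure itself is not reproduced here.
import sqlite3

conn = sqlite3.connect(":memory:")  # throwaway, in-memory database
cur = conn.cursor()

# Several related lists (tables), plus a table that links them together.
cur.executescript("""
CREATE TABLE students (student_id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE courses  (course_id  INTEGER PRIMARY KEY, title TEXT);
CREATE TABLE enrollments (
    student_id INTEGER REFERENCES students(student_id),
    course_id  INTEGER REFERENCES courses(course_id)
);
""")

cur.executemany("INSERT INTO students VALUES (?, ?)",
                [(1, "Ada"), (2, "Grace")])
cur.executemany("INSERT INTO courses VALUES (?, ?)",
                [(10, "Intro to Information Systems"), (20, "Database Design")])
cur.executemany("INSERT INTO enrollments VALUES (?, ?)",
                [(1, 10), (1, 20), (2, 10)])

# SQL turns raw rows (data) into the answer to a question (information):
# who is enrolled in which course?
for row in cur.execute("""
    SELECT s.name, c.title
    FROM enrollments e
    JOIN students s ON s.student_id = e.student_id
    JOIN courses  c ON c.course_id  = e.course_id
    ORDER BY s.name"""):
    print(row)

conn.close()
```

The SELECT statement at the end is the kind of query a reporting tool would issue on a user’s behalf; the same tables could just as easily live in Oracle, DB2, SQL Server, or MySQL.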
textbooks/biz/Management/Book%3A_Information_Systems__A_Managers_Guide_to_Harnessing_Technology/11%3A_The_Data_Asset-_Databases_Business_Intelligence_and_Competitive_Advantage/11.02%3A_Data_Information_and_Knowledge.txt
Learning Objectives After studying this section you should be able to do the following: 1. Understand various internal and external sources for enterprise data. 2. Recognize the function and role of data aggregators, the potential for leveraging third-party data, the strategic implications of relying on externally purchased data, and key issues associated with aggregators and firms that leverage externally sourced data. Organizations can pull together data from a variety of sources. While the examples that follow aren’t meant to be an encyclopedic listing of possibilities, they will give you a sense of the diversity of options available for data gathering. • Transaction Processing Systems For most organizations that sell directly to their customers, transaction processing systems (TPS) represent a fountain of potentially insightful data. Every time a consumer uses a point-of-sale system, an ATM, or a service desk, there’s a transaction (some kind of business exchange) occurring, representing an event that’s likely worth tracking. The cash register is the data generation workhorse of most physical retailers, and the primary source that feeds data to the TPS. But while TPS can generate a lot of bits, it’s sometimes tough to match this data with a specific customer. For example, if you pay a retailer in cash, you’re likely to remain a mystery to your merchant because your name isn’t attached to your money. Grocers and retailers can tie you to cash transactions if they can convince you to use a loyalty card. Use one of these cards and you’re in effect giving up information about yourself in exchange for some kind of financial incentive. The explosion in retailer cards is directly related to each firm’s desire to learn more about you and to turn you into a more loyal and satisfied customer. Some cards provide an instant discount (e.g., the CVS Pharmacy ExtraCare card), while others allow you to build up points over time (Best Buy’s Reward Zone). The latter has the additional benefit of acting as a switching cost. A customer may think “I could get the same thing at Target, but at Best Buy, it’ll increase my existing points balance and soon I’ll get a cash back coupon.” Tesco: Tracked Transactions, Increased Insights, and Surging Sales UK grocery giant Tesco, the planet’s third-largest retailer, is envied worldwide for what analysts say is the firm’s unrivaled ability to collect vast amounts of retail data and translate this into sales (Capell, 2008). Tesco’s data collection relies heavily on its ClubCard loyalty program, an effort pioneered back in 1995. But Tesco isn’t just a physical retailer. As the world’s largest Internet grocer, the firm gains additional data from Web site visits, too. Remove products from your virtual shopping cart? Tesco can track this. Visited a product comparison page? Tesco watches which product you’ve chosen to go with and which you’ve passed over. Done your research online, then traveled to a store to make a purchase? Tesco sees this, too. Tesco then mines all this data to understand how consumers respond to factors such as product mix, pricing, marketing campaigns, store layout, and Web design. Consumer-level targeting allows the firm to tailor its marketing messages to specific subgroups, promoting the right offer through the right channel at the right time and the right price. To get a sense of Tesco’s laser-focused targeting possibilities, consider that the firm sends out close to ten million different, targeted offers each quarter (Davenport & Harris, 2007). 
Offer redemption rates are the best in the industry, with some coupons scoring an astronomical 90 percent usage (Lowenstein, 2002)! The firm’s data-driven management is clearly delivering results. In April 2009, while operating in the teeth of a global recession, Tesco posted record corporate profits and the highest earnings ever for a British retailer (Capell, 2009).
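The mechanics behind this kind of customer-level insight begin with something mundane: tying each transaction in the TPS to a persistent customer identifier. The toy sketch below is not Tesco’s system; the records and field names are invented simply to show why an anonymous cash sale is far less useful to a retailer than one attached to a loyalty card.

```python
# Illustrative sketch: why a loyalty card matters to a retailer's TPS data.
# Sample records and field names are hypothetical, not any retailer's actual schema.
from collections import defaultdict

transactions = [
    # (transaction_id, loyalty_card_id or None for anonymous cash sales, amount)
    ("t1", "card-123", 42.50),
    ("t2", None,        18.00),   # cash, no card: the shopper stays anonymous
    ("t3", "card-123",  61.75),
    ("t4", "card-987",   7.25),
]

spend_by_customer = defaultdict(float)
anonymous_spend = 0.0

for _tid, card, amount in transactions:
    if card is None:
        anonymous_spend += amount          # revenue is known, the shopper is not
    else:
        spend_by_customer[card] += amount  # ties the basket to a household over time

print("Spend per identified customer:", dict(spend_by_customer))
print("Spend that can't be tied to anyone:", anonymous_spend)
```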
textbooks/biz/Management/Book%3A_Information_Systems__A_Managers_Guide_to_Harnessing_Technology/11%3A_The_Data_Asset-_Databases_Business_Intelligence_and_Competitive_Advantage/11.03%3A_Where_Does_Data_Come_From.txt
Learning Objectives After studying this section you should be able to do the following: 1. Know and be able to list the reasons why many organizations have data that can’t be converted to actionable information. 2. Understand why transactional databases can’t always be queried and what needs to be done to facilitate effective data use for analytics and business intelligence. 3. Recognize key issues surrounding data and privacy legislation. Despite being awash in data, many organizations are data rich but information poor. A survey by consulting firm Accenture found 57 percent of companies reporting that they didn’t have a beneficial, consistently updated, companywide analytical capability. Among major decisions, only 60 percent were backed by analytics—40 percent were made by intuition and gut instinct (King, 2009). The big culprit limiting BI initiatives is getting data into a form where it can be used, analyzed, and turned into information. Here’s a look at some factors holding back information advantages. • Incompatible Systems Just because data is collected doesn’t mean it can be used. This limit is a big problem for large firms that have legacy systems, outdated information systems that were not designed to share data, aren’t compatible with newer technologies, and aren’t aligned with the firm’s current business needs. The problem can be made worse by mergers and acquisitions, especially if a firm depends on operational systems that are incompatible with those of its merger partner. And the elimination of incompatible systems isn’t just a technical issue. Firms might be under extended agreements with different vendors or outsourcers, and breaking a contract or invoking an escape clause may be costly. Folks working in M&A (the area of investment banking focused on valuing and facilitating mergers and acquisitions) beware—it’s critical to uncover these hidden costs of technology integration before deciding if a deal makes financial sense. Legacy Systems: A Prison for Strategic Assets The experience of one Fortune 100 firm that your author has worked with illustrates how incompatible information systems can actually hold back strategy. This firm was the largest in its category, and sold identical commodity products sourced from its many plants worldwide. Being the biggest should have given the firm scale advantages. But many of the firm’s manufacturing facilities and international locations developed or purchased separate, incompatible systems. Still more plants arrived through acquisitions, each with its own legacy systems. The plants with different information systems used different part numbers and naming conventions even though they sold identical products. As a result, the firm had no timely information on how much of a particular item was sold to which worldwide customers. The company was essentially operating as a collection of smaller, regional businesses, rather than as the worldwide behemoth that it was. After the firm developed an information system that standardized data across these plants, it was, for the first time, able to get a single view of worldwide sales. The firm then used this data to approach its biggest customers, negotiating lower prices in exchange for increased commitments in worldwide purchasing. This trade let the firm take share from regional rivals. It also gave the firm the ability to shift manufacturing capacity globally, as currency prices, labor conditions, disasters, and other factors impacted sourcing. 
The new information system in effect liberated the latent strategic asset of scale, increasing sales by well over a billion and a half dollars in the four years following implementation.
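A minimal sketch of the standardization step at the heart of this story appears below. The plant names, part numbers, and sales figures are invented; the point is simply that a shared mapping from each plant’s local codes to one canonical product identifier is what makes a single worldwide view of sales possible.

```python
# Sketch of the standardization idea behind the story above: map each plant's
# local part number to one canonical ID so sales can be rolled up worldwide.
# All part numbers and sales figures here are made up for illustration.

local_to_canonical = {
    ("plant_us", "WX-100"): "GLOBAL-001",
    ("plant_de", "100-WX"): "GLOBAL-001",    # same physical product, different local code
    ("plant_br", "PROD.100"): "GLOBAL-001",
    ("plant_us", "WX-200"): "GLOBAL-002",
}

plant_sales = [
    ("plant_us", "WX-100", 1200),
    ("plant_de", "100-WX",  800),
    ("plant_br", "PROD.100", 450),
    ("plant_us", "WX-200",  300),
]

worldwide = {}
for plant, local_code, units in plant_sales:
    canonical = local_to_canonical[(plant, local_code)]
    worldwide[canonical] = worldwide.get(canonical, 0) + units

# Only after the mapping is applied does a "single view" of worldwide sales exist.
print(worldwide)   # {'GLOBAL-001': 2450, 'GLOBAL-002': 300}
```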
textbooks/biz/Management/Book%3A_Information_Systems__A_Managers_Guide_to_Harnessing_Technology/11%3A_The_Data_Asset-_Databases_Business_Intelligence_and_Competitive_Advantage/11.04%3A_Data_Rich_Information_Poor.txt
Learning Objectives After studying this section you should be able to do the following: 1. Understand what data warehouses and data marts are and the purpose they serve. 2. Know the issues that need to be addressed in order to design, develop, deploy, and maintain data warehouses and data marts. Since running analytics against transactional data can bog down a system, and since most organizations need to combine and reformat data from multiple sources, firms typically need to create separate data repositories for their reporting and analytics work—a kind of staging area from which to turn that data into information. Two terms you’ll hear for these kinds of repositories are data warehouse and data mart. A data warehouse is a set of databases designed to support decision making in an organization. It is structured for fast online queries and exploration. Data warehouses may aggregate enormous amounts of data from many different operational systems. A data mart is a database focused on addressing the concerns of a specific problem (e.g., increasing customer retention, improving product quality) or business unit (e.g., marketing, engineering). Marts and warehouses may contain huge volumes of data. For example, a firm may not need to keep large amounts of historical point-of-sale or transaction data in its operational systems, but it might want past data in its data mart so that managers can hunt for patterns and trends that occur over time. Figure 11.2 Information systems supporting operations (such as TPS) are typically separate, and “feed” information systems used for analytics (such as data warehouses and data marts). It’s easy for firms to get seduced by a software vendor’s demonstration showing data at your fingertips, presented in pretty graphs. But as mentioned earlier, getting data in a format that can be used for analytics is hard, complex, and challenging work. Large data warehouses can cost millions and take years to build. Every dollar spent on technology may lead to five to seven more dollars on consulting and other services (King, 2009). Most firms will face a trade-off—do we attempt a large-scale integration of the whole firm, or more targeted efforts with quicker payoffs? Firms in fast-moving industries or with particularly complex businesses may struggle to get sweeping projects completed in enough time to reap benefits before business conditions change. Most consultants now advise smaller projects with narrow scope driven by specific business goals (Rigby & Ledingham, 2004; King, 2009). Firms can eventually get to a unified data warehouse, but it may take time. Even analytics king Wal-Mart is just getting to that point. In 2007, it was reported that Wal-Mart had seven hundred different data marts and hired Hewlett-Packard for help in bringing the systems together to form a more integrated data warehouse (Havenstein, 2007). The old saying from the movie Field of Dreams, “If you build it, they will come,” doesn’t hold up well for large-scale data analytics projects. This work should start with a clear vision and business-focused objectives. When senior executives can see objectives illustrated in potential payoff, they’ll be able to champion the effort, and experts agree that having an executive champion is a key success factor. Focusing on business issues will also drive technology choice, with the firm better able to focus on products that best fit its needs. 
Once a firm has business goals and hoped-for payoffs clearly defined, it can address the broader issues needed to design, develop, deploy, and maintain its system1: • Data relevance. What data is needed to compete on analytics and to meet our current and future goals? • Data sourcing. Can we even get the data we’ll need? Where can this data be obtained from? Is it available via our internal systems? Via third-party data aggregators? Via suppliers or sales partners? Do we need to set up new systems, surveys, and other collection efforts to acquire the data we need? • Data quantity. How much data is needed? • Data quality. Can our data be trusted as accurate? Is it clean, complete, and reasonably free of errors? How can the data be made more accurate and valuable for analysis? Will we need to ‘scrub,’ calculate, and consolidate data so that it can be used? • Data hosting. Where will the systems be housed? What are the hardware and networking requirements for the effort? • Data governance. What rules and processes are needed to manage data from its creation through its retirement? Are there operational issues (backup, disaster recovery)? Legal issues? Privacy issues? How should the firm handle security and access? For some perspective on how difficult this can be, consider that an executive from one of the largest U.S. banks once lamented how difficult it was to get his systems to do something as simple as properly distinguishing between men and women. The company’s customer-focused data warehouse drew data from thirty-six separate operational systems—bank teller systems, ATMs, student loan reporting systems, car loan systems, mortgage loan systems, and more. Collectively these legacy systems expressed gender in seventeen different ways: “M” or “F”; “m” or “f”; “Male” or “Female”; “MALE” or “FEMALE”; “1” for man, “0” for woman; “0” for man, “1” for woman; and more, plus various codes for “unknown.” The best math in the world is of no help if the values used aren’t any good. There’s a saying in the industry, “garbage in, garbage out.” (A small scrubbing sketch at the end of this section shows what normalizing codes like these might look like.) E-discovery: Supporting Legal Inquiries Data archiving isn’t just for analytics. Sometimes the law requires organizations to dive into their electronic records. E-discovery refers to identifying and retrieving relevant electronic information to support litigation efforts. E-discovery is something a firm should account for in its archiving and data storage plans. Unlike analytics that promise a boost to the bottom line, there’s no profit in complying with a judge’s order—it’s just a sunk cost. But organizations can be compelled by court order to scavenge their bits, and the cost to uncover difficult-to-access data can be significant, if not planned for in advance. In one recent example, the Office of Federal Housing Enterprise Oversight (OFHEO) was subpoenaed for documents in litigation involving mortgage firms Fannie Mae and Freddie Mac. Even though the OFHEO wasn’t a party in the lawsuit, the agency had to comply with the search—an effort that cost $6 million, a full 9 percent of its total yearly budget (Conry-Murray, 2009). Key Takeaways • Data warehouses and data marts are repositories for large amounts of transactional data awaiting analytics and reporting. • Large data warehouses are complex, can cost millions, and take years to build. Questions and Exercises 1. List the issues that need to be addressed in order to design, develop, deploy, and maintain data warehouses and data marts. 2. What is meant by “data relevance”? 3. 
What is meant by “data governance”? 4. What is the difference between a data mart and a data warehouse? 5. Why are data marts and data warehouses necessary? Why can’t an organization simply query its transactional database? 6. How can something as simple as customer gender be difficult for a large organization to establish in a data warehouse? 1Key points adapted from T. Davenport and J. Harris, Competing on Analytics: The New Science of Winning (Boston: Harvard Business School Press, 2007).
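Here is the small scrubbing sketch referred to above. It normalizes the kinds of inconsistent gender codes described in the bank anecdote; the specific mapping rules, including the hypothetical system that reverses its numeric codes, are assumptions for illustration, since a real project would document each source system’s conventions before loading the warehouse.

```python
# A minimal data-scrubbing sketch in the spirit of the gender anecdote above:
# collapse many legacy encodings into one standard value before loading a warehouse.
# The mapping shown is an assumption; real source systems would each be documented.

CANONICAL = {
    "m": "M", "male": "M", "1": "M",
    "f": "F", "female": "F", "0": "F",
}

def normalize_gender(raw: str, source_system: str) -> str:
    """Map one source system's raw code to 'M', 'F', or 'U' (unknown)."""
    key = raw.strip().lower()
    # Some systems reverse their numeric codes, so the source matters.
    if source_system == "car_loans":   # hypothetical system where 0 = man, 1 = woman
        key = {"0": "m", "1": "f"}.get(key, key)
    return CANONICAL.get(key, "U")

rows = [("MALE", "teller"), ("f", "atm"), ("0", "car_loans"), ("2", "mortgage")]
print([normalize_gender(raw, src) for raw, src in rows])   # ['M', 'F', 'M', 'U']
```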
textbooks/biz/Management/Book%3A_Information_Systems__A_Managers_Guide_to_Harnessing_Technology/11%3A_The_Data_Asset-_Databases_Business_Intelligence_and_Competitive_Advantage/11.05%3A_Data_Warehouses_and_Data_Marts.txt
Learning Objectives After studying this section you should be able to do the following: 1. Know the tools that are available to turn data into information. 2. Identify the key areas where businesses leverage data mining. 3. Understand some of the conditions under which analytical models can fail. 4. Recognize major categories of artificial intelligence and understand how organizations are leveraging this technology. So far we’ve discussed where data can come from, and how we can get data into a form where we can use it. But how, exactly, do firms turn that data into information? That’s where the various software tools of business intelligence (BI) and analytics come in. Potential products in the business intelligence toolkit range from simple spreadsheets to ultrasophisticated data mining packages leveraged by teams employing “rocket-science” mathematics. • Query and Reporting Tools The idea behind query and reporting tools is to present users with a subset of requested data, selected, sorted, ordered, calculated, and compared, as needed. Managers use these tools to see and explore what’s happening inside their organizations. Canned reports provide regular summaries of information in a predetermined format. They’re often developed by information systems staff and formats can be difficult to alter. By contrast, ad hoc reporting tools allow users to dive in and create their own reports, selecting fields, ranges, and other parameters to build their own reports on the fly. Dashboards provide a sort of heads-up display of critical indicators, letting managers get a graphical glance at key performance metrics. Some tools may allow data to be exported into spreadsheets. Yes, even the lowly spreadsheet can be a powerful tool for modeling “what if” scenarios and creating additional reports (of course be careful: if data can be easily exported, then it can potentially leave the firm dangerously exposed, raising privacy, security, legal, and competitive concerns). Figure 11.3 The Federal IT Dashboard The Federal IT dashboard offers federal agencies, and the general public, information about the government’s IT investments. A subcategory of reporting tools is referred to as online analytical processing (OLAP) (pronounced “oh-lap”). Data used in OLAP reporting is usually sourced from standard relational databases, but it’s calculated and summarized in advance, across multiple dimensions, with the data stored in a special database called a data cube. This extra setup step makes OLAP fast (sometimes one thousand times faster than performing comparable queries against conventional relational databases). Given this kind of speed boost, it’s not surprising that data cubes for OLAP access are often part of a firm’s data mart and data warehouse efforts. A manager using an OLAP tool can quickly explore and compare data across multiple factors such as time, geography, product lines, and so on. In fact, OLAP users often talk about how they can “slice and dice” their data, “drilling down” inside the data to uncover new insights. And while conventional reports are usually presented as a summarized list of information, OLAP results look more like a spreadsheet, with the various dimensions of analysis in rows and columns, with summary values at the intersection. Public Sector Reporting Tools in Action: Fighting Crime and Fighting Waste Access to ad hoc query and reporting tools can empower all sorts of workers. Consider what analytics tools have done for the police force in Richmond, Virginia. 
The city provides department investigators with access to data from internal sources such as 911 logs and police reports, and combines this with outside data including neighborhood demographics, payday schedules, weather reports, traffic patterns, sports events, and more. Experienced officers dive into this data, exploring when and where crimes occur. These insights help the department decide how to allocate its limited policing assets to achieve the biggest impact. While IT staffers put the system together, the tools are actually used by officers with expertise in fighting street crime—the kinds of users with the knowledge to hunt down trends and interpret the causes behind the data. And it seems this data helps make smart cops even smarter—the system is credited with delivering a single-year crime-rate reduction of 20 percent (Lohr, 2007). As it turns out, what works for cops also works for bureaucrats. When administrators in Albuquerque were given access to ad hoc reporting systems, they uncovered all sorts of anomalies, prompting cuts in excess spending on everything from cell phone usage to unnecessarily scheduled overtime. And once again, BI performed for the public sector. The Albuquerque system delivered the equivalent of $2 million in savings in just the first three weeks it was used (Mulcahy, 2007).
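The “slice and dice” idea described earlier in this section is easier to see with a toy example. The sketch below uses invented sales records and plain Python dictionaries rather than a real OLAP engine or data cube, but it shows what summarizing a fact table across multiple dimensions, with values at the intersections, looks like in practice.

```python
# Illustrative "slice and dice" in plain Python: summarize a tiny fact table
# across two dimensions (region x quarter), the way an OLAP tool presents data.
# The sales records below are invented for illustration.

facts = [
    # (region, quarter, product_line, revenue)
    ("East", "Q1", "shoes", 100), ("East", "Q1", "hats", 40),
    ("East", "Q2", "shoes", 120), ("West", "Q1", "shoes", 90),
    ("West", "Q2", "shoes", 150), ("West", "Q2", "hats", 30),
]

# Roll up revenue at the intersection of the two chosen dimensions.
cube = {}
for region, quarter, _product, revenue in facts:
    cube[(region, quarter)] = cube.get((region, quarter), 0) + revenue

# Print a small cross-tab: rows are regions, columns are quarters.
regions = sorted({r for r, _ in cube})
quarters = sorted({q for _, q in cube})
print("region  " + "  ".join(quarters))
for r in regions:
    print(f"{r:<7}" + "  ".join(f"{cube.get((r, q), 0):>3}" for q in quarters))

# "Drilling down" is just adding a dimension (e.g., product_line) to the key;
# "slicing" is filtering one dimension to a single value before rolling up.
```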
textbooks/biz/Management/Book%3A_Information_Systems__A_Managers_Guide_to_Harnessing_Technology/11%3A_The_Data_Asset-_Databases_Business_Intelligence_and_Competitive_Advantage/11.06%3A_The_Business_Intelligence_Toolkit.txt
Learning Objectives After studying this section you should be able to do the following: 1. Understand how Wal-Mart has leveraged information technology to become the world’s largest retailer. 2. Be aware of the challenges that face Wal-Mart in the years ahead. Wal-Mart demonstrates how a physical product retailer can create and leverage a data asset to achieve world-class supply chain efficiencies targeted primarily at driving down costs. Wal-Mart isn’t just the largest retailer in the world; over the past several years it has popped in and out of the top spot on the Fortune 500 list—meaning that the firm has had revenues greater than any firm in the United States. Wal-Mart is so big that in three months it sells more than number two U.S. retailer Home Depot sells in an entire year1. At that size, it’s clear that Wal-Mart’s key source of competitive advantage is scale. But firms don’t turn into giants overnight. Wal-Mart grew in large part by leveraging information systems to an extent never before seen in the retail industry. Technology tightly coordinates the Wal-Mart value chain from tip to tail, while these systems also deliver a mineable data asset that’s unmatched in U.S. retail. To get a sense of the firm’s overall efficiencies, at the end of the prior decade a McKinsey study found that Wal-Mart was responsible for some 12 percent of the productivity gains in the entire U.S. economy (Fishman, 2007). The firm’s capacity as a systems innovator is so respected that many senior Wal-Mart IT executives have been snatched up for top roles at Dell, HP, Amazon, and Microsoft. And lest one think that innovation is the province of only those located in the technology hubs of Silicon Valley, Boston, and Seattle, remember that Wal-Mart is headquartered in Bentonville, Arkansas. • A Data-Driven Value Chain The Wal-Mart efficiency dance starts with a proprietary system called Retail Link, a system originally developed in 1991 and continually refined ever since. Each time an item is scanned by a Wal-Mart cash register, Retail Link not only records the sale, it also automatically triggers inventory reordering, scheduling, and delivery. This process keeps shelves stocked, while keeping inventories at a minimum (a small sketch at the end of this section illustrates the idea). An AMR report ranked Wal-Mart as having the seventh-best supply chain in the country (the only other retailer in the top twenty was Tesco, at number fifteen) (Friscia, et. al., 2009). The firm’s annual inventory turnover ratio of 8.5 means that Wal-Mart sells the equivalent of its entire inventory roughly every six weeks (by comparison, Target’s turnover ratio is 6.4, Sears’ is 3.4, and the average for U.S. retail is less than 2)2. Back-office scanners keep track of inventory as supplier shipments come in. Suppliers are rated based on timeliness of deliveries, and you’ve got to be quick to work with Wal-Mart. In order to avoid a tractor-trailer traffic jam in store parking lots, deliveries are choreographed to arrive at intervals less than ten minutes apart. When Levi’s joined Wal-Mart, the firm had to guarantee it could replenish shelves every two days—no prior retailer had required a window shorter than five days from Levi’s (Fishman, 2007). Wal-Mart has been a catalyst for technology adoption among its suppliers. The firm is currently leading an adoption effort that requires partners to leverage RFID technology to track and coordinate inventories. 
While the rollout has been slow, a recent P&G trial showed RFID boosted sales nearly 20 percent by ensuring that inventory was on shelves and located where it should be (Joseph, 2009). • Data Mining Prowess Wal-Mart also mines its mother lode of data to get its product mix right under all sorts of varying environmental conditions, protecting the firm from “a retailer’s twin nightmares: too much inventory, or not enough” (Hays, 2004). For example, the firm’s data mining efforts informed buyers that customers stock up on certain products in the days leading up to predicted hurricanes. Bumping up prestorm supplies of batteries and bottled water was a no-brainer, but the firm also learned that Pop-Tarts sales spike sevenfold before storms hit, and that beer is the top prestorm seller. This insight has led to truckloads full of six-packs and toaster pastries streaming into Gulf states whenever word of a big storm surfaces (Hays, 2004). Data mining also helps the firm tighten operational forecasts, helping to predict things like how many cashiers are needed at a given store at various times of day throughout the year. Data drives the organization, with mined reports forming the basis of weekly sales meetings, as well as executive strategy sessions. • Sharing Data, Keeping Secrets While Wal-Mart is demanding of its suppliers, it shares data with them, too. Data can help firms become more efficient so that Wal-Mart can keep dropping prices, and data can help firms uncover patterns that help suppliers sell more. P&G’s Gillette unit, for example, claims to have mined Wal-Mart data to develop promotions that increased sales as much as 19 percent. More than seventeen thousand suppliers are given access to their products’ Wal-Mart performance across metrics that include daily sales, shipments, returns, purchase orders, invoices, claims, and forecasts. And these suppliers collectively interrogate Wal-Mart data warehouses to the tune of twenty-one million queries a year (Evans-Correia, 2006). While Wal-Mart shares sales data with relevant suppliers, the firm otherwise fiercely guards this asset. Many retailers pool their data by sharing it with information brokers like Information Resources and ACNielsen. This sharing allows smaller firms to pool their data to provide more comprehensive insight on market behavior. But Wal-Mart stopped sharing data with these agencies years ago. The firm’s scale is so big, the additional data provided by brokers wasn’t adding much value, and it no longer made sense to allow competitors access to what was happening in its own huge chunk of retail sales. Other aspects of the firm’s technology remain under wraps, too. Wal-Mart custom builds large portions of its information systems to keep competitors off its trail. As for infrastructure secrets, the Wal-Mart Data Center in McDonald County, Missouri, was considered so off-limits that the county assessor was required to sign a nondisclosure statement before being allowed on-site to estimate property value (McCoy, 2006). • Challenges Abound But despite success, challenges continue. While Wal-Mart grew dramatically throughout the 1990s, the firm’s U.S. business has largely matured. And as a mature business it faces a problem not unlike the example of Microsoft discussed at the end of Chapter 14 “Google: Search, Online Advertising, and Beyond”; Wal-Mart needs to find huge markets or dramatic cost savings in order to boost profits and continue to move its stock price higher. 
The firm’s aggressiveness and sheer size also increasingly make Wal-Mart a target for criticism. Those low prices come at a price, and the firm has faced accusations of subpar wages and remains a magnet for union activists. Others have identified poor labor conditions at some of the firm’s contract manufacturers. Suppliers that compete for Wal-Mart’s business are often faced with a catch-22. If they bypass Wal-Mart they miss out on the largest single chunk of world retail sales. But if they sell to Wal-Mart, the firm may demand prices so aggressively low that suppliers end up cannibalizing their own sales at other retailers. Still more criticism comes from local citizen groups that have accused Wal-Mart of ruining the market for mom-and-pop stores (Fishman, 2007). While some might see Wal-Mart as invincibly standing at the summit of world retail, it’s important to note that other megaretailers have fallen from grace. In the 1920s and 1930s, the A&P grocery chain once controlled 80 percent of U.S. grocery sales, at its peak operating five times the number of stores that Wal-Mart has today. But market conditions changed, and the government stepped in to draft antipredatory pricing laws when it felt A&P’s parent was too aggressive. For all of Wal-Mart’s data brilliance, historical data offers little insight on how to adapt to more radical changes in the retail landscape. The firm’s data warehouse wasn’t able to foretell the rise of Target and other up-market discounters. And yet another major battle is brewing, as Tesco methodically attempts to take its globally honed expertise to U.S. shores. Savvy managers recognize that data use is a vital tool, but not the only tool in management’s strategic arsenal. Key Takeaways • Wal-Mart demonstrates how a physical product retailer can create and leverage a data asset to achieve world-class value chain efficiencies. • Wal-Mart uses data mining in numerous ways, from demand forecasting to predicting the number of cashiers needed at a store at a particular time. • To help suppliers become more efficient, and as a result lower prices, Wal-Mart shares data with them. • Despite its success, Wal-Mart is a mature business that needs to find huge markets or dramatic cost savings in order to boost profits and continue to move its stock price higher. The firm’s success also makes it a high-impact target for criticism and activism. And the firm’s data assets could not predict impactful industry trends such as the rise of Target and other upscale discounters. Questions and Exercises 1. List the functions performed by Retail Link. What is its benefit to Wal-Mart? 2. Which supplier metrics does Retail Link gather and report? How is this valuable to Wal-Mart and suppliers? 3. Name the technology Wal-Mart requires partners to use to track and coordinate inventory. Do you know of other uses for this technology? 4. What steps has Wal-Mart taken to protect its data from competitors? 5. List the criticisms leveled at Wal-Mart. Do you think these critiques are valid or not? What can Wal-Mart do to counteract this criticism? Should it take these steps? Why or why not?
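Retail Link itself is proprietary, so the sketch below should be read only as an illustration of the scan-triggered replenishment idea described earlier in this section: the item numbers, reorder thresholds, and quantities are invented. The final line also works through the inventory turnover arithmetic cited above.

```python
# A toy sketch of the idea behind scan-triggered replenishment described above.
# This is NOT Wal-Mart's Retail Link; item numbers, thresholds, and logic are invented.

inventory = {"sku-001": 12, "sku-002": 3}
REORDER_POINT = 5        # when on-hand stock falls to this level, reorder
REORDER_QUANTITY = 24    # hypothetical case size

def scan_sale(sku, reorders):
    """Record a register scan: decrement stock and trigger a reorder if needed."""
    inventory[sku] -= 1
    if inventory[sku] <= REORDER_POINT:
        reorders.append(f"reorder {REORDER_QUANTITY} units of {sku}")

pending = []
for sku in ["sku-001", "sku-002", "sku-002"]:
    scan_sale(sku, pending)
print(inventory)   # {'sku-001': 11, 'sku-002': 1}
print(pending)     # two reorder messages, both for sku-002

# The turnover figure cited above works out the same way:
# an annual inventory turnover of 8.5 implies 52 weeks / 8.5, or roughly 6.1 weeks per turn.
print(round(52 / 8.5, 1))
```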
textbooks/biz/Management/Book%3A_Information_Systems__A_Managers_Guide_to_Harnessing_Technology/11%3A_The_Data_Asset-_Databases_Business_Intelligence_and_Competitive_Advantage/11.07%3A_Data_Asset_in_Action-_Technology_and_the_Rise_of_Wal.txt
Learning Objectives After studying this section you should be able to do the following: 1. Understand how Harrah’s has used IT to move from an also-ran chain of casinos to become the largest gaming company based on revenue. 2. Name some of the technology innovations that Harrah’s is using to help it gather more data, and help push service quality and marketing program success. Harrah’s Entertainment provides an example of exceptional data asset leverage in the service sector, focusing on how this technology enables world-class service through customer relationship management. Gary Loveman is a sort of management major trifecta. The CEO of Harrah’s Entertainment is a former operations professor who has leveraged information technology to create what may be the most effective marketing organization in the service industry. If you ever needed an incentive to motivate you for cross-disciplinary thinking, Loveman provides it. Harrah’s has leveraged its data-powered prowess to move from an also-ran chain of casinos to become the largest gaming company by revenue. The firm operates some fifty-three casinos, employing more than eighty-five thousand workers on five continents. Brands include Harrah’s, Caesars Palace, Bally’s, Horseshoe, and Paris Las Vegas. Under Loveman, Harrah’s has aggressively swallowed competitors, the firm’s $9.4 billion buyout of Caesars Entertainment being its largest deal to date. • Collecting Data Data drives the firm. Harrah’s collects customer data on just about everything you might do at their properties—gamble, eat, grab a drink, attend a show, stay in a room. The data’s then used to track your preferences and to size up whether you’re the kind of customer that’s worth pursuing. Prove your worth, and the firm will surround you with top-tier service and develop a targeted marketing campaign to keep wooing you back (Magnini, et. al., 2003). The ace in the firm’s data collection hole is its Total Rewards loyalty card system. Launched over a decade ago, the system is constantly being enhanced by an IT staff of seven hundred, with an annual budget in excess of $100 million (Swabey, 2007). Total Rewards is an opt-in loyalty program, but customers consider the incentives to be so good that the card is used by some 80 percent of Harrah’s patrons, collecting data on over forty-four million customers (Wagner, 2008; Haugsted, 2007). Customers signing up for the card provide Harrah’s with demographic information such as gender, age, and address. Visitors then present the card for various transactions. Slide it into a slot machine, show it to the restaurant hostess, present it to the parking valet, share your account number with a telephone reservation specialist—every contact point is an opportunity to collect data. Between three hundred thousand and one million customers come through Harrah’s doors daily, adding to the firm’s data stash and keeping that asset fresh (Hoover, 2007). • Who Are the Most Valuable Customers? All that data is heavily and relentlessly mined. Customer relationship management should include an assessment to determine which customers are worth having a relationship with. And because Harrah’s has so much detailed historical data, the firm can make fairly accurate projections of customer lifetime value (CLV). CLV represents the present value of the likely future income stream generated by an individual purchaser1. Once you know this, you can get a sense of how much you should spend to keep that customer coming back. 
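Harrah’s actual models are not public, but the textbook idea behind customer lifetime value can be sketched in a few lines: discount each future period’s expected margin back to present value and sum the stream. The margin, retention, and discount figures below are invented for illustration.

```python
# A back-of-the-envelope sketch of customer lifetime value (CLV) as described above:
# the present value of the income stream a customer is expected to generate.
# The margin estimate, retention rate, and discount rate below are invented.

def customer_lifetime_value(annual_margin, retention_rate, discount_rate, years):
    """Discount each year's expected margin back to today's dollars and sum."""
    clv = 0.0
    for year in range(1, years + 1):
        expected_margin = annual_margin * (retention_rate ** year)   # odds the customer is still active
        clv += expected_margin / ((1 + discount_rate) ** year)       # present value of that year's margin
    return clv

# e.g., a guest expected to contribute $400 per year, with an 80 percent chance of
# returning each year, discounted at 10 percent, over a five-year horizon:
print(round(customer_lifetime_value(400, 0.80, 0.10, 5), 2))
```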
You can size them up next to their peer group, and if they fall below expectations, you can develop strategies to improve their spending. The firm tracks over ninety demographic segments, and each responds differently to different marketing approaches. Identifying segments and figuring out how to deal with each involves an iterative model of mining the data to identify patterns, creating a hypothesis (customers in group X will respond to a free steak dinner; group Y will want ten dollars in casino chips), then testing that hypothesis against a control group, turning again to analytics to statistically verify the outcome. The firm runs hundreds of these small, controlled experiments each year. Loveman says that when marketers suggest new initiatives, “I ask, did we test it first? And if I find out that we just whole-hogged, went after something without testing it, I’ll kill ’em. No matter how clever they think it is, we test it” (Nickell, 2002). The former ops professor is known to often quote quality guru W. Edwards Deming, saying, “In God we trust; all others must bring data.” When Harrah’s began diving into the data, they uncovered patterns that defied the conventional wisdom in the gaming industry. Big money didn’t come from European princes, Hong Kong shipping heirs, or the Ocean’s 11 crowd—it came from locals. The less than 30 percent of customers who spent between one hundred and five hundred dollars per visit accounted for over 80 percent of revenues and nearly 100 percent of profits (Swabey, 2007). The data also showed that the firm’s most important customers weren’t the families that many Vegas competitors were trying to woo with Disneyland-style theme casinos—it was Grandma! Harrah’s focuses on customers forty-five years and older: twenty-somethings have no money, while thirty-somethings have kids and are too busy. To the premiddle-aged crowd, Loveman says, “God bless you, but we don’t need you” (Haugsted, 2007). • Data-Driven Service: Get Close (but Not Too Close) to Your Customers The names for reward levels on the Total Rewards card convey increasing customer value—Gold, Diamond, and Platinum. Spend more money at Harrah’s and you’ll enjoy shorter lines, discounts, free items, and more. And if Harrah’s systems determine you’re a high-value customer, expect white-glove treatment. The firm will lavish you with attention, using technology to try to anticipate your every need. Customers notice the extra treatment that top-tier Total Rewards members receive and actively work to improve their status. To illustrate this, Loveman points to the obituary of an Asheville, North Carolina, woman who frequented a casino Harrah’s operates on a nearby Cherokee reservation. “Her obituary was published in the Asheville paper and indicated that at the time of her death, she had several grandchildren, she sang in the Baptist choir and she was a holder of the Harrah’s Diamond Total Rewards card.” Quipped Loveman, “When your loyalty card is listed in someone’s obituary, I would maintain you have traction” (Loveman, 2005). The degree of customer service pushed through the system is astonishing. Upon check-in, a Harrah’s customer who enjoys fine dining may find his or her table is reserved, along with tickets for a show afterward. Others may get suggestions or special offers throughout their stay, pushed via text message to their mobile device (Wagner, 2008). 
The firm even tracks gamblers to see if they’re suffering unusual losses, and Harrah’s will dispatch service people to intervene with a feel-good offer: “Having a bad day? Here’s a free buffet coupon” (Davenport & Harris, 2007). The firm’s CRM effort monitors any customer behavior changes. If a customer who usually spends a few hundred a month hasn’t shown up in a while, the firm’s systems trigger follow-up contact methods such as sending a letter with a promotion offer, or having a rep make a phone call inviting them back (Loveman, 2005). Customers come back to Harrah’s because they feel that those casinos treat them better than the competition. And Harrah’s laser-like focus on service quality and customer satisfaction is embedded into its information systems and operational procedures. Employees are measured on metrics that include speed and friendliness and are compensated based on guest satisfaction ratings. Hourly workers are notoriously difficult to motivate: they tend to be high-turnover, low-wage earners. But at Harrah’s, incentive bonuses depend on an entire location’s ratings. That encourages strong performers to share tips to bring the new guy up to speed. The process effectively changed the corporate culture at Harrah’s from an every-property-for-itself mentality to a collaborative, customer-focused enterprise (Magnini & Honeycutt, 2003). While Harrah’s is committed to learning how to make the customer experience better, the firm is also keenly sensitive to respecting consumer data. The firm has never sold or given away any of its bits to third parties. And the firm admits that some of its efforts to track customers have misfired, requiring special attention to find the sometimes subtle line between helpful and “too helpful.” For example, the firm’s CIO has mentioned that customers found it “creepy and Big Brother-ish” when employees tried to greet them by name and talk with them about their past business history at Harrah’s, so the firm backed off (Wagner, 2008). • Innovation Harrah’s is constantly tinkering with new innovations that help it gather more data and help push service quality and marketing program success. When the introduction of gaming in Pennsylvania threatened to divert lucrative New York City gamblers from Harrah’s Atlantic City properties, the firm launched an interactive billboard in New York’s Times Square, allowing passers-by to operate a virtual slot machine using text messages from their cell phones. Players dialing into the video billboard not only control the display, they also receive text message offers promoting Harrah’s sites in Atlantic City2. At Harrah’s, tech experiments abound. RFID-enabled poker chips and under-table RFID readers allow pit bosses to track and rate game play far better than they could before. The firm is experimenting with using RFID-embedded bracelets for poolside purchases and Total Rewards tracking for when customers aren’t carrying their wallets. The firm has also incorporated drink ordering into gaming machines—why make customers get up to quench their thirst? A break in gambling is a halt in revenue. The firm was also one of the first to sign on to use Microsoft’s Surface technology—a sort of touch-screen and sensor-equipped tabletop. Customers at these tables can play bowling and group pinball games and even pay for drinks using cards that the tables will automatically identify. 
Tech even helps Harrah’s fight card counters and crooks, with facial recognition software scanning casino patrons to spot the bad guys (Lohr, 2007). • Strategy A walk around Vegas during Harrah’s ascendancy would find rivals with bigger, fancier casinos. Says Loveman, “We had to compete with the kind of place that God would build if he had the money.…The only thing we had was data” (Swabey, 2007). That data advantage creates intelligence for a high-quality and highly personal customer experience. Data gives the firm a service differentiation edge. The loyalty program also represents a switching cost. And these assets are leveraged across a firm that has gained so much scale that it’s now the largest player in its industry, with the ability to cross-sell customers on a variety of properties—Vegas vacations, riverboat gambling, locally focused reservation properties, and more. Harrah’s chief marketing officer, David Norton, points out that when Total Rewards started, Harrah’s was earning about thirty-six cents on every dollar customers spent gaming—the rest went to competitors. A climb to forty cents would be considered monstrous. By 2005 that number had climbed to forty-five cents, making Harrah’s the biggest monster in the industry (Lundquist, 2005). Some of the firm’s technology investments have paid back tenfold in just two years—bringing in hundreds of millions of dollars (Swabey, 2007). The firm’s technology has been pretty tough for others to match, too. Harrah’s holds several patents covering key business methods and technologies used in its systems. After being acquired by Harrah’s, employees of Caesars lamented that they had, for years, unsuccessfully attempted to replicate Harrah’s systems without violating the firm’s intellectual property (Hoover, 2007). • Challenges Harrah’s efforts to gather data, extract information, and turn this into real profits are unparalleled, but they’re not a cure-all. Broader events can often derail even the best strategy. Gaming is a discretionary spending item, and when the economy tanks, gambling is one of the first things consumers will cut. Harrah’s has not been immune to the world financial crisis and experienced a loss in 2008. Also note that if you look up Harrah’s stock symbol you won’t find it. The firm was taken private in January 2008, when buyout firms Apollo Management and TPG Capital paid \$30.7 billion for all of the firm’s shares. At that time Loveman signed a five-year deal to remain on as CEO, and he’s spoken positively about the benefits of being private—primarily that with the distraction of quarterly earnings off the table, he’s been able to focus on the long-term viability and health of the business (Knightly, 2009). But the firm also holds \$24 billion in debt from expansion projects and the buyout, all at a time when economic conditions have not been favorable to leveraged firms (Lattman, 2009). A brilliantly successful firm that developed best-in-class customer relationship management is now in a position many consider risky due to debt assumed as part of an overly optimistic buyout occurring at precisely the time when the economy went into a terrible funk. Harrah’s awesome risk-reducing, profit-pushing analytics failed to offer any insight on the wisdom (or risk) in the debt and private equity deals. Key Takeaways • Harrah’s Entertainment provides an example of exceptional data asset leverage in the service sector, focusing on how this technology enables world-class service through customer relationship management.
• Harrah’s uses its Total Rewards loyalty card system to collect customer data on just about everything you might do at their properties—gamble, eat, drink, see a show, stay in a room, and so on. • Individual customers signing up for the Total Rewards loyalty card provide Harrah’s with demographic information such as gender, age, and address, which is combined with transactional data as the card is used. • Data mining also provides information about ninety-plus customer demographic segments, each of which responds differently to different marketing approaches. • If Harrah’s systems determine you’re a high-value customer, you can expect a higher level of perks and service. • Harrah’s CRM effort monitors any customer behavior changes. • Harrah’s uses its information systems and operating procedures to measure employees based on metrics that include speed and friendliness, and compensates them based on guest satisfaction ratings. Questions and Exercises 1. What types of customer data does Harrah’s gather? 2. How is the data that Harrah’s collects used? 3. Describe Harrah’s most valuable customers. Approximately what percentage of profits does this broad group deliver to the firm? 4. List the services a Total Rewards cardholder might expect. 5. What happens when a good, regular customer stops showing up? 6. Describe how Harrah’s treats customer data. 7. List some of the technology innovations that Harrah’s is using to help it gather more data, and help push service quality and marketing program success. 8. How does Harrah’s Total Rewards loyalty card system represent a switching cost? 9. What is customer lifetime value? Do you think this is an easier metric to calculate at Harrah’s or Wal-Mart? Why? 10. How did intellectual property protection benefit Harrah’s? 11. Discuss the challenges Harrah’s may have to confront in the near future. 12. Describe the role that testing plays in Harrah’s marketing initiatives. What advantage does testing provide the firm? What’s the CEO’s attitude toward testing? Do you agree with this level of commitment? Why or why not?
There’s all sorts of hidden magic happening whenever you connect to the Internet. But what really makes it possible for you to reach servers halfway around the world in just a fraction of a second? Knowing this is not only flat-out fascinating stuff; it’s also critically important for today’s manager to have at least a working knowledge of how the Internet functions. That’s because the Internet is a platform of possibilities and a business enabler. Understanding how the Internet and networking work can help you brainstorm new products and services and understand roadblocks that might limit turning your ideas into reality. Marketing professionals who know how the Internet reaches consumers have a better understanding of how technologies can be used to find and target customers. Finance firms that rely on trading speed to move billions in the blink of an eye need to master Internet infrastructure to avoid being swept aside by more nimble market movers. And knowing how the Internet works helps all managers understand where their firms are vulnerable. In most industries today, if your network goes down, then you might as well shut your doors and go home; it’s nearly impossible to get anything done if you can’t get online. Managers who know the Net are prepared to take the appropriate steps to secure their firms and keep their organizations constantly connected. 12.02: Internet 101- Understanding How the Internet Works Learning Objectives After studying this section you should be able to do the following: 1. Describe how the technologies of the Internet combine to answer these questions: What are you looking for? Where is it? And how do we get there? 2. Interpret a URL, understand what hosts and domains are, describe how domain registration works, describe cybersquatting, and give examples of conditions that constitute a valid and invalid domain-related trademark dispute. 3. Describe certain aspects of the Internet infrastructure that are fault-tolerant and support load balancing. 4. Discuss the role of hosts, domains, IP addresses, and the DNS in making the Internet work. The Internet is a network of networks—millions of them, actually. If the network at your university, your employer, or in your home has Internet access, it connects to an Internet service provider (ISP). Many (but not all) ISPs are big telecommunications companies like Verizon, Comcast, and AT&T. These providers connect to one another, exchanging traffic, and ensuring your messages can get to any other computer that’s online and willing to communicate with you. The Internet has no center and no one owns it. That’s a good thing. The Internet was designed to be redundant and fault-tolerant—meaning that if one network, connecting wire, or server stops working, everything else should keep on running. Rising from military research and work at educational institutions dating as far back as the 1960s, the Internet really took off in the 1990s, when graphical Web browsing was invented, and much of the Internet’s operating infrastructure was transitioned to be supported by private firms rather than government grants. Figure 12.1 The Internet is a network of networks, and these networks are connected together. In the diagram above, the “state.edu” campus network is connected to other networks of the Internet via two ISPs: Cogent and Verizon. Enough history—let’s see how it all works! If you want to communicate with another computer on the Internet then your computer needs to know the answer to three questions: What are you looking for?
Where is it? And how do we get there? The computers and software that make up Internet infrastructure can help provide the answers. Let’s look at how it all comes together. • The URL: “What Are You Looking For?” When you type an address into a Web browser (sometimes called a URL for uniform resource locator), you’re telling your browser what you’re looking for. Figure 12.2 “Anatomy of a Web Address” describes how to read a typical URL. Figure 12.2 Anatomy of a Web Address The URL displayed really says, “Use the Web (http://) to find a host server named ‘www’ in the ‘nytimes.com’ network, look in the ‘tech’ directory, and access the ‘index.html’ file.” The http:// you see at the start of most Web addresses stands for hypertext transfer protocol. A protocol is a set of rules for communication—sort of like grammar and vocabulary in a language like English. The http protocol defines how Web browsers and Web servers communicate and is designed to be independent of the computer’s hardware and operating system. It doesn’t matter if messages come from a PC, a Mac, a huge mainframe, or a pocket-sized smartphone; if a device speaks to another using a common protocol, then it will be heard and understood. The Internet supports lots of different applications, and many of these applications use their own application transfer protocol to communicate with each other. The server that holds your e-mail uses something called SMTP, or simple mail transfer protocol, to exchange mail with other e-mail servers throughout the world. FTP, or file transfer protocol, is used for—you guessed it—file transfer. FTP is how most Web developers upload the Web pages, graphics, and other files for their Web sites. Even the Web uses different protocols. When you surf to an online bank or when you’re ready to enter your payment information at the Web site of an Internet retailer, the http at the beginning of your URL will probably change to https (the “s” is for secure). That means that communications between your browser and server will be encrypted for safe transmission. The beauty of the Internet infrastructure is that any savvy entrepreneur can create a new application that rides on top of the Internet. • Hosts and Domain Names The next part of the URL in our diagram holds the host and domain name. Think of the domain name as the name of the network you’re trying to connect to, and think of the host as the computer you’re looking for on that network. Many domains have lots of different hosts. For example, Yahoo!’s main Web site is served from the host named “www” (at the address http://www.yahoo.com), but Yahoo! also runs other hosts including those named “finance” (finance.yahoo.com), “sports” (sports.yahoo.com), and “games” (games.yahoo.com). Host and Domain Names: A Bit More Complex Than That While it’s useful to think of a host as a single computer, popular Web sites often have several computers that work together to share the load for incoming requests. Assigning several computers to a host name offers load balancing and fault tolerance, helping ensure that all visits to a popular site like http://www.google.com won’t overload a single computer, or that Google doesn’t go down if one computer fails. It’s also possible for a single computer to have several host names. This might be the case if a firm were hosting several Web sites on a single piece of computing hardware. Some domains are also further broken down into subdomains—many times to represent smaller networks or subgroups within a larger organization.
For example, the address http://www.rhsmith.umd.edu is a University of Maryland address with a host “www” located in the subdomain “rhsmith” for the Robert H. Smith School of Business. International URLs might also include a second-level domain classification scheme. British URLs use this scheme, for example, with the BBC carrying the commercial (.co) designation—http://www.bbc.co.uk—and the University of Oxford carrying the academic (.ac) designation—http://www.ox.ac.uk. You can actually go 127 levels deep in assigning subdomains, but that wouldn’t make it easy on those who have to type in a URL that long. Most Web sites are configured to load a default host, so you can often eliminate the host name if you want to go to the most popular host on a site (the default host is almost always named “www”). Another tip: most browsers will automatically add the “http://” for you, too. Host and domain names are not case sensitive, so you can use a combination of upper and lower case letters and you’ll still get to your destination. I Want My Own Domain You can stake your domain name claim in cyberspace by going through a firm called a domain name registrar. You don’t really buy a domain name; you simply pay a registrar for the right to use that name, with the right renewable over time. While some registrars simply register domain names, others act as Web hosting services that are able to run your Web site on their Internet-connected servers for a fee. Registrars throughout the world are accredited by ICANN (Internet Corporation for Assigned Names and Numbers), a nonprofit governance and standards-setting body. Each registrar may be granted the ability to register domain names in one or more of the Net’s generic top-level domains (gTLDs), such as “.com,” “.net,” or “.org.” There are dozens of registrars that can register “.com” domain names, the most popular gTLD. Some generic top-level domain names, like “.com,” have no restrictions on use, while others limit registration. For example, “.edu” is restricted to U.S.-accredited, postsecondary institutions. ICANN has also announced plans to allow organizations to sponsor their own top-level domains (e.g., “.berlin,” or “.coke”). There are also separate agencies that handle over 250 different two-character country code top-level domains, or ccTLDs (e.g., “.uk” for the United Kingdom and “.jp” for Japan). Servers or organizations generally don’t need to be housed within a country to use a country code as part of their domain names, leading to a number of creatively named Web sites. The URL-shortening site “bit.ly” uses Libya’s “.ly” top-level domain; many physicians are partial to Moldova’s code (“.md”); and the tiny Pacific island nation of Tuvalu might not have a single broadcast television station, but that doesn’t stop it from licensing its country code to firms that want a “.tv” domain name (Maney, 2004). Recent standards also allow domain names in languages that use non-Latin alphabets such as Arabic and Russian. Domain name registration is handled on a first-come, first-served basis and all registrars share registration data to ensure that no two firms gain rights to the same name. Start-ups often sport wacky names, partly because so many domains with common words and phrases are already registered to others. While some domain names are held by legitimate businesses, others are registered by investors hoping to resell a name’s rights. Trade in domain names can be lucrative.
For example, the “Insure.com” domain was sold to QuinStreet for \$16 million in fall 2009 (Bosker, 2010). But knowingly registering a domain name to profit from someone else’s firm name or trademark is known as cybersquatting and that’s illegal. The United States has passed the Anticybersquatting Consumer Protection Act (ACPA), and ICANN has the Domain Name Dispute Resolution Policy that can reach across borders. Try to extort money by holding a domain name that’s identical to (or in some cases, even similar to) a well-known trademark holder and you could be stripped of your domain name and even fined. Courts and dispute resolution authorities will sometimes allow a domain that uses the trademark of another organization if it is perceived to have legitimate, nonexploitive reasons for doing so. For example, the now-defunct site Verizonreallysucks.com was registered as a protest against the networking giant and was considered fair use since its owners didn’t try to extort money from the telecom giant (Streitfeld, 2000). However, the courts allowed the owner of the PETA trademark (the organization People for the Ethical Treatment of Animals) to claim the domain name peta.org from the original registrant, who had been using that domain to host a site called “People Eating Tasty Animals” (McCullagh, 2001). Trying to predict how authorities will rule can be difficult. The musician Sting’s name was thought to be too generic to deserve the rights to Sting.com, but Madonna was able to take back her domain name (for the record, Sting now owns Sting.com) (Konrad & Hansen, 2000). Apple executive Jonathan Ive was denied the right to reclaim domain names incorporating his own name that had been registered by another party without his consent. The publicity-shy design guru wasn’t considered enough of a public figure to warrant protection (Morson, 2009). And sometimes disputing parties can come to an agreement outside of court or ICANN’s dispute resolution mechanisms. When Canadian teenager Michael Rowe registered a site for his part-time Web design business, a firm south of the border took notice of his domain name—Mikerowesoft.com. The two parties eventually settled in a deal that swapped the domain for an Xbox and a trip to the Microsoft Research Tech Fest (Kotadia, 2004). • Path Name and File Name Look to the right of the top-level domain and you might see a slash followed by either a path name, a file name, or both. If a Web address has a path and file name, the path maps to a folder location where the file is stored on the server; the file is the name of the file you’re looking for. Most Web pages end in “.html,” indicating they are in hypertext markup language. While http helps browsers and servers communicate, html is the language used to create and format (render) Web pages. A file, however, doesn’t need to be .html; Web servers can deliver just about any type of file: Acrobat documents (.pdf), PowerPoint documents (.ppt or .pptx), Word docs (.doc or .docx), JPEG graphic images (.jpg), and—as we’ll see in Chapter 13 “Information Security: Barbarians at the Gateway (and Just About Everywhere Else)”—even malware programs that attack your PC. At some Web addresses, the file displays content for every visitor, and at others (like amazon.com), a file will contain programs that run on the Web server to generate custom content just for you. You don’t always type a path or file name as part of a Web address, but there’s always a file lurking behind the scenes.
A Web address without a file name will load content from a default page. For example, when you visit “google.com,” Google automatically pulls up a page called “index.html,” a file that contains the Web page that displays the Google logo, the text entry field, the “Google Search” button, and so on. You might not see it, but it’s there. Butterfingers, beware! Path and file names are case sensitive—amazon.com/books is considered to be different from amazon.com/BOOKS. Mistype your capital letters after the domain name and you might get a 404 error (the very unfriendly Web server error code that means the document was not found).
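To tie the pieces of the URL anatomy together, here is a small illustrative sketch in Python that splits a Web address into the parts discussed above—protocol, host, domain, path, and file name. It uses the nytimes.com example from earlier purely as a sample string; nothing here contacts that site or depends on the page actually existing.

```python
from urllib.parse import urlparse

url = "http://www.nytimes.com/tech/index.html"
parts = urlparse(url)

protocol = parts.scheme                            # "http" -- the protocol browser and server speak
host, _, domain = parts.netloc.partition(".")      # host "www", domain "nytimes.com"
path, _, filename = parts.path.rpartition("/")     # path "/tech", file "index.html"

print("protocol:", protocol)
print("host    :", host)
print("domain  :", domain)
print("path    :", path)
print("file    :", filename)
```

Note that the host-versus-domain split used here is the simple “first label versus the rest” view described in this section; addresses with subdomains (like www.rhsmith.umd.edu) or second-level country schemes (like www.bbc.co.uk) would need a more careful split. And, per the warning above, a program like this could safely lowercase the host and domain portion but should leave the path and file name untouched, since those may be case sensitive.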
Learning Objectives After studying this section you should be able to do the following: 1. Understand the layers that make up the Internet—application protocol, transmission control protocol, and Internet protocol—and describe why each is important. 2. Discuss the benefits of Internet architecture in general and TCP/IP in particular. 3. Name applications that should use TCP and others that might use UDP. 4. Understand what a router does and the role these devices play in networking. 5. Conduct a traceroute and discuss the output, demonstrating how Internet interconnections work in getting messages from point to point. 6. Understand why mastery of Internet infrastructure is critical to modern finance and be able to discuss the risks in automated trading systems. 7. Describe VoIP, and contrast circuit versus packet switching, along with organizational benefits and limitations of each. • TCP/IP: The Internet’s Secret Sauce OK, we know how to read a Web address, we know that every device connected to the Net needs an IP address, and we know that the DNS can look at a Web address and find the IP address of the machine that you want to communicate with. But how does a Web page, an e-mail, or an iTunes download actually get from a remote computer to your desktop? For our next part of the Internet journey, we’ll learn about two additional protocols: TCP and IP. These protocols are often written as TCP/IP and pronounced by reading all five letters in a row, “T-C-P-I-P” (sometimes they’re also referred to as the Internet protocol suite). TCP and IP are built into any device that a user would use to connect to the Internet—from handhelds to desktops to supercomputers—and together TCP/IP make internetworking happen. Figure 12.4 TCP/IP in Action In this example, a server on the left sends a Web page to the user on the right. The application (the Web server) passes the contents of the page to TCP (which is built into the server’s operating system). TCP slices the Web page into packets. Then IP takes over, forwarding packets from router to router across the Internet until they arrive at the user’s PC. Packets sometimes take different routes, and occasionally arrive out of order. TCP running on the receiving system on the right checks that all packets have arrived, requests that damaged or lost packets be resent, puts them in the right order, and sends a perfect, exact copy of the Web page to your browser. TCP and IP operate below http and the other application transfer protocols mentioned earlier. TCP (transmission control protocol) works its magic at the start and endpoint of the trip—on both your computer and on the destination computer you’re communicating with. Let’s say a Web server wants to send you a large Web page. The Web server application hands the Web page it wants to send to its own version of TCP. TCP then slices up the Web page into smaller chunks of data called packets (or datagrams). The packets are like little envelopes containing part of the entire transmission—they’re labeled with a destination address (where it’s going) and a source address (where it came from). Now we’ll leave TCP for a second, because TCP on the Web server then hands those packets off to the second half of our dynamic duo, IP. It’s the job of IP (Internet protocol) to route the packets to their final destination, and those packets might have to travel over several networks to get to where they’re going.
The relay work is done via special computers called routers, and these routers speak to each other and to other computers using IP (since routers are connected to the Internet, they have IP addresses, too. Some are even named). Every computer on the Internet is connected to a router, and all routers are connected to at least one (and usually more than one) other router, linking up the networks that make up the Internet. Routers don’t have perfect, end-to-end information on all points in the Internet, but they do talk to each other all the time, so a router has a pretty good idea of where to send a packet to get it closer to where it needs to end up. This chatter between the routers also keeps the Internet decentralized and fault-tolerant. Even if one path out of a router goes down (a networking cable gets cut, a router breaks, the power to a router goes out), as long as there’s another connection out of that router, then your packet will get forwarded. Networks fail, so good, fault-tolerant network design involves having alternate paths into and out of a network. Once packets are received by the destination computer (your computer in our example), that machine’s version of TCP kicks in. TCP checks that it has all the packets, makes sure that no packets were damaged or corrupted, requests replacement packets (if needed), and then puts the packets in the correct order, passing a perfect copy of your transmission to the program you’re communicating with (an e-mail server, Web server, etc.). This progression—application at the source to TCP at the source (slice up the data being sent), to IP (for forwarding among routers), to TCP at the destination (put the transmission back together and make sure it’s perfect), to application at the destination—takes place in both directions, starting at the server for messages coming to you, and starting on your computer when you’re sending messages to another computer. UDP: TCP’s Faster, Less Reliable Sibling TCP is a perfectionist and that’s what you want for Web transmissions, e-mail, and application downloads. But sometimes we’re willing to sacrifice perfection for speed. You’d make this sacrifice for streaming media applications like Windows Media Player, Real Player, Internet voice chat, and video conferencing. Having to wait to make sure each packet is perfectly sent would otherwise lead to awkward pauses that interrupt real-time listening. It’d be better to just grab the packets as they come and play them, even if they have minor errors. Packets are small enough that if one packet doesn’t arrive, you can ignore it and move on to the next without too much quality disruption. A protocol called UDP (user datagram protocol) does exactly this, working as a TCP stand-in when you’ve got the need for speed, and are willing to sacrifice quality. If you’ve ever watched a Web video or had a Web-based phone call and the quality got sketchy, it’s probably because there were packet problems, but UDP kept on chugging, making the “get it fast” instead of “get it perfect” trade-off. VoIP: When Phone Calls Are Just Another Internet Application The increasing speed and reliability of the Internet means that applications such as Internet phone calls (referred to as VoIP, or voice over Internet protocol) are becoming more reliable. That doesn’t just mean that Skype becomes a more viable alternative for consumer landline and mobile phone calls; it’s also good news for many businesses, governments, and nonprofits. 
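Before looking at what the move to VoIP means for organizations, here is a toy sketch (in Python, as an illustration only) of the packet idea behind both TCP and UDP described above. It slices a message into numbered packets, scrambles and drops one in transit, and then contrasts a TCP-style receiver—which would ask for the missing packet again and reassemble a perfect copy—with a UDP-style receiver that simply plays back whatever arrived. Real TCP and UDP live inside the operating system, not in application code like this; the message and packet size are arbitrary.

```python
import random

def packetize(message, size=10):
    """Slice a message into numbered packets, the way TCP chops up a transmission."""
    return [(seq, message[i:i + size])
            for seq, i in enumerate(range(0, len(message), size))]

message = "Packets are little envelopes labeled with source and destination addresses."
packets = packetize(message)

random.shuffle(packets)   # packets may take different routes and arrive out of order
lost = packets.pop()      # ...and one never shows up

# TCP-style receiver: spots the gap, requests a resend (simulated here by adding the
# lost packet back), reorders by sequence number, and delivers a perfect copy.
tcp_copy = "".join(chunk for _, chunk in sorted(packets + [lost]))
print("TCP-style copy:", tcp_copy)

# UDP-style receiver: no resends; reorder what arrived and live with the gap.
udp_copy = "".join(chunk for _, chunk in sorted(packets))
print("UDP-style copy:", udp_copy)
```

The UDP-style copy comes back with a chunk missing—unacceptable for a Web page or an e-mail, but usually a fair trade for the speed that streaming audio, video, and voice applications need.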
Many large organizations maintain two networks—one for data and another for POTS (plain old telephone service). Maintaining two networks is expensive, and while conventional phone calls are usually of a higher quality than their Internet counterparts, POTS equipment is also inefficient. Old phone systems use a technology called circuit switching. A “circuit” is a dedicated connection between two entities. When you have a POTS phone call, a circuit is open, dedicating a specific amount of capacity between you and the party on the other end. You’re using that “circuit” regardless of whether you’re talking. Pause between words or put someone on hold, and the circuit is still in use. Anyone who has ever tried to make a phone call at a busy time (say, early morning on Mother’s Day or at midnight on New Year’s Eve) and received an “all circuits are busy” recording has experienced congestion on an inefficient circuit-switched phone network. But unlike circuit-switched counterparts, Internet networks are packet-switched networks, which can be more efficient. Since we can slice conversations up into packets, we can squeeze them into smaller spaces. If there are pauses in a conversation or someone’s on hold, applications don’t hold up the network. And that creates an opportunity to use the network’s available capacity for other users. The trade-off is one that swaps circuit switching’s quality of service (QoS) with packet switching’s efficiency and cost savings. Try to have a VoIP call when there’s too much traffic on a portion of the network and your call quality will drop. But packet switching quality is getting much better. Networking standards are now offering special features, such as “packet prioritization,” that can allow voice packets to gain delivery priority over packets for applications like e-mail, where a slight delay is OK. When voice is digitized, “telephone service” simply becomes another application that sits on top of the Internet, like the Web, e-mail, or FTP. VoIP calls between remote offices can save long distance charges. And when the phone system becomes a computer application, you can do a lot more. Well-implemented VoIP systems allow users’ browsers access to their voice mail inbox, one-click video conferencing and call forwarding, point-and-click conference call setup, and other features, but you’ll still have a phone number, just like with POTS. • What Connects the Routers and Computers? Routers are connected together, either via cables or wirelessly. A cable connecting a computer in a home or office is probably copper (likely what’s usually called an Ethernet cable), with transmissions sent through the copper via electricity. Long-haul cables, those that carry lots of data over long distances, are usually fiber-optic lines—glass lined cables that transmit light (light is faster and travels farther distances than electricity, but fiber-optic networking equipment is more expensive than the copper-electricity kind). Wireless transmission can happen via Wi-Fi (for shorter distances), or cell phone tower or satellite over longer distances. But the beauty of the Internet protocol suite (TCP/IP) is that it doesn’t matter what the actual transmission media are. As long as your routing equipment can connect any two networks, and as long as that equipment “speaks” IP, then you can be part of the Internet. In reality, your messages likely transfer via lots of different transmission media to get to their final destination. 
If you use a laptop connected via Wi-Fi, then that wireless connection finds a base station, usually within about three hundred feet. That base station is probably connected to a local area network (LAN) via a copper cable. And your firm or college may connect to fast, long-haul portions of the Internet via fiber-optic cables provided by that firm’s Internet service provider (ISP). Most big organizations have multiple ISPs for redundancy, providing multiple paths in and out of a network. This is so that if a network connection provided by one firm goes down, say an errant backhoe cuts a cable, other connections can route around the problem (see Figure 12.1). In the United States (and in most deregulated telecommunications markets), Internet service providers come in all sizes, from smaller regional players to sprawling international firms. When different ISPs connect their networking equipment together to share traffic, it’s called peering. Peering usually takes place at neutral sites called Internet exchange points (IXPs), although some firms also have private peering points. Carriers usually don’t charge one another for peering. Instead, “the money is made” in the ISP business by charging the end-points in a network—the customer organizations and end users that an ISP connects to the Internet. Competition among carriers helps keep prices down, quality high, and innovation moving forward. Finance Has a Need for Speed When many folks think of Wall Street trading, they think of the open outcry pit at the New York Stock Exchange (NYSE). But human traders are just too slow for many of the most active trading firms. Over half of all U.S. stock trades and a quarter of worldwide currency trades now happen via programs that make trading decisions without any human intervention (Timmons, 2006). There are many names for this automated, data-driven frontier of finance—algorithmic trading, black-box trading, or high-frequency trading. And while firms specializing in automated, high-frequency trading represent only about 2 percent of the trading firms operating in the United States, they account for about three quarters of all U.S. equity trading volume (Iati, 2009). Programmers lie at the heart of modern finance. “A geek who writes code—those guys are now the valuable guys” says the former head of markets systems at Fidelity Investments, and that rare breed of top programmer can make “tens of millions of dollars” developing these systems (Berenson, 2009). Such systems leverage data mining and other model-building techniques to crunch massive volumes of data and discover exploitable market patterns. Models are then run against real-time data and executed the instant a trading opportunity is detected. (For more details on how data is gathered and models are built, see Chapter 11 “The Data Asset: Databases, Business Intelligence, and Competitive Advantage”.) Winning with these systems means being quick—very quick. Suffer delay (what techies call latency) and you may have missed your opportunity to pounce on a signal or market imperfection. To cut latency, many trading firms are moving their servers out of their own data centers and into colocation facilities. These facilities act as storage places where a firm’s servers get superfast connections as close to the action as possible. And by renting space in a “colo,” a firm gets someone else to manage the electrical and cooling issues, often providing more robust power backup and lower energy costs than a firm might get on its own. 
Equinix, a major publicly traded IXP and colocation firm with facilities worldwide, has added a growing number of high-frequency trading firms to a roster of customers that includes e-commerce, Internet, software, and telecom companies. In northern New Jersey alone (the location of many of the servers where “Wall Street” trading takes place), Equinix hosts some eighteen exchanges and trading platforms as well as the NYSE Secure Financial Transaction Infrastructure (SFTI) access node. Less than a decade ago, eighty milliseconds was acceptably low latency, but now trading firms are pushing below one millisecond into microseconds (Schmerken, 2009). So it’s pretty clear that understanding how the Internet works, and how to best exploit it, is of fundamental and strategic importance to those in finance. But also recognize that this kind of automated trading comes with risks. Systems that run on their own can move many billions in the blink of an eye, and the actions of one system may cascade, triggering actions by others. The spring 2010 “Flash Crash” resulted in a nearly 1,000-point freefall in the Dow Jones Industrial Average, its biggest intraday drop ever. Those black boxes can be mysterious—weeks after the May 6th event, experts were still parsing through trading records, trying to unearth how the flash crash happened (Daimler & Davis, 2010). Regulators and lawmakers recognize they now need to understand technology, telecommunications, and their broader impact on society so that they can create platforms that fuel growth without putting the economy at risk. Watching the Packet Path via Traceroute Want to see how packets bounce from router to router as they travel around the Internet? Check out a tool called traceroute. Traceroute repeatedly sends a cluster of three packets starting at the first router connected to a computer, then the next, and so on, building out the path that packets take to their destination. Traceroute is built into all major desktop operating systems (Windows, Macs, Linux), and several Web sites will run traceroute between locations (traceroute.org and visualroute.visualware.com are great places to explore). The message below shows a traceroute performed between Irish firm VistaTEC and Boston College. At first, it looks like a bunch of gibberish, but if we look closely, we can decipher what’s going on. Figure 12.5 The table above shows ten hops, starting at a domain in vistatec.ie and ending in 136.167.9.226 (the table doesn’t say this, but all IP addresses starting with 136.167 are Boston College addresses). The three groups of numbers at the end of each line show the time (in milliseconds) of three packets sent out to test that hop of our journey. These numbers might be interesting for network administrators trying to diagnose speed issues, but we’ll ignore them and focus on how packets get from point to point. At the start of each line is the name of the computer or router that is relaying packets for that leg of the journey. Sometimes routers are named, and sometimes they’re just IP addresses. When routers are named, we can tell what network a packet is on by looking at the domain name. By looking at the router names to the left of each line in the traceroute above, we see that the first two hops are within the vistatec.ie network. Hop 3 shows the first router outside the vistatec.ie network. It’s at a domain named tinet.net, so this must be the name of VistaTEC’s Internet service provider since it’s the first connection outside the vistatec.ie network.
Sometimes router names suggest their locations (oftentimes they use the same three-character abbreviations you’d see in airports). Look closely at the hosts in hops 3 through 7. The subdomains dub20, lon11, lon01, jfk02, and bos01 suggest the packets are going from Dublin, then east to London, then west to New York City (John F. Kennedy International Airport), then north to Boston. That’s a long way to travel in a fraction of a second! Hop 4 is at tinet.net, but hop 5 is at cogentco.com (look them up online and you’ll find out that cogentco.com, like tinet.net, is also an ISP). That suggests that between those hops peering is taking place and traffic is handed off from carrier to carrier. Hop 8 is still cogentco.com, but it’s not clear who the unnamed router in hop 9, 38.104.218.10, belongs to. We can use the Internet to sleuth that out, too. Search the Internet for the phrase “IP address lookup” and you’ll find a bunch of tools to track down the organization that “owns” an IP address. Using the tool at whatismyip.com, I found that this number is registered to PSI Net, which is now part of cogentco.com. Routing paths, ISPs, and peering all revealed via traceroute. You’ve just performed a sort of network “CAT scan” and looked into the veins and arteries that make up a portion of the Internet. Pretty cool! If you try out traceroute on your own, be aware that not all routers and networks are traceroute friendly. It’s possible that as your trace hits some hops along the way (particularly at the start or end of your journey), three “*” characters will show up at the end of each line instead of the numbers indicating packet speed. This indicates that traceroute has timed out on that hop. Some networks block traceroute because hackers have used the tool to probe a network to figure out how to attack an organization. Most of the time, though, the hops between the source and destination of the traceroute (the steps involving all the ISPs and their routers) are visible. Traceroute can be a neat way to explore how the Internet works and reinforce the topics we’ve just learned. Search for traceroute tools online or browse the Internet for details on how to use the traceroute command built into your computer. There’s Another Internet? If you’re a student at a large research university, there’s a good chance that your school is part of Internet2. Internet2 is a research network created by a consortium of research, academic, industry, and government organizations. These organizations have collectively set up a high-performance network running at speeds of up to one hundred gigabits per second to support and experiment with demanding applications. Examples include high-quality video conferencing; high-reliability, high-bandwidth imaging for the medical field; and applications that share huge data sets among researchers. If your university is an Internet2 member and you’re communicating with another computer that’s part of the Internet2 consortium, then your organization’s routers are smart enough to route traffic through the superfast Internet2 backbone. If that’s the case, you’re likely already using Internet2 without even knowing it! Key Takeaways • TCP/IP, or the Internet protocol suite, helps get perfect copies of Internet transmissions from one location to another. TCP works on the ends of transmission, breaking transmissions up into manageable packets at the start and putting them back together while checking quality at the end. IP works in the middle, routing packets to their destination.
• Routers are special computing devices that forward packets from one location to the next. Routers are typically connected with more than one outbound path, so in case one path becomes unavailable, an alternate path can be used. • UDP is a replacement for TCP, used when it makes sense to sacrifice packet quality for delivery speed. It’s often used for media streaming. • TCP/IP doesn’t care about the transmission media. This allows networks of different types—copper, fiber, and wireless—to connect to and participate in the Internet. • The ability to swap in new applications, protocols, and media files gives the network tremendous flexibility. • Decentralization, fault tolerance, and redundancy help keep the network open and reliable. • VoIP allows voice and phone systems to become an application traveling over the Internet. This is allowing many firms to save money on phone calls and through the elimination of old, inefficient circuit-switched networks. As Internet applications, VoIP phone systems can also have additional features that circuit-switched networks lack. The primary limitation of many VoIP systems is quality of service. • Many firms in the finance industry have developed automated trading models that analyze data and execute trades without human intervention. Response times substantially less than one second may be vital to capitalizing on market opportunities, so firms are increasingly moving equipment into colocation facilities that provide high-speed connectivity to other trading systems. Questions and Exercises 1. How can the Internet consist of networks of such physically different transmission media—cable, fiber, and wireless? 2. What is the difference between TCP and UDP? Why would you use one over the other? 3. Would you recommend a VoIP phone system to your firm or university? Why or why not? What are the advantages? What are the disadvantages? Can you think of possible concerns or benefits not mentioned in this section? Research these concerns online and share your findings with your instructor. 4. What are the risks in the kinds of automated trading systems described in this section? Conduct research and find an example of where these systems have caused problems for firms and/or the broader market. What can be done to prevent such problems? Whose responsibility is this? 5. Search the Internet for a traceroute tool, or look online to figure out how to use the traceroute command built into your PC. Run three or more traceroutes to different firms at different locations around the world. List the number of ISPs that show up in the trace. Circle the areas where peering occurs. Do some of the “hops” time out with “*” values returned? If so, why do you think that happened? 6. Find out if your school or employer is an Internet2 member. If it is, run traceroutes to schools that are and are not members of Internet2. What differences do you see in the results?
Learning Objectives After studying this section you should be able to do the following: 1. Understand the last-mile problem and be able to discuss the pros and cons of various broadband technologies, including DSL, cable, fiber, and various wireless offerings. 2. Describe 3G and 4G systems, listing major technologies and their backers. 3. Understand the issue of Net neutrality and put forth arguments supporting or criticizing the concept. The Internet backbone is made of fiber-optic lines that carry data traffic over long distances. Those lines are pretty speedy. In fact, several backbone providers, including AT&T and Verizon, are rolling out infrastructure with 100 Gbps transmission speeds (that’s enough to transmit a two-hour high-definition [HD] movie in about eight seconds)1 (Spangler, 2010). But when considering overall network speed, remember Amdahl’s Law: a system’s speed is determined by its slowest component (Gilder, 2000). More often than not, the bottleneck isn’t the backbone but the so-called last mile, or the connections that customers use to get online. High-speed last-mile technologies are often referred to as broadband Internet access (or just broadband). What qualifies as broadband varies. In 2009, the Federal Communications Commission (FCC) redefined broadband as having a minimum speed of 768 Kbps (roughly fourteen times the speed of those old 56 Kbps modems). Other agencies worldwide may have different definitions. But one thing is clear: a new generation of bandwidth-demanding services requires more capacity. As we increasingly consume Internet services like HD streaming, real-time gaming, video conferencing, and music downloads, we are in fact becoming a bunch of voracious, bit-craving gluttons. With the pivotal role the United States has played in the creation of the Internet, and in pioneering software, hardware, and telecommunications industries, you might expect the United States to lead the world in last-mile broadband access. Not even close. A recent study ranked the United States twenty-sixth in download speeds (Lawson, 2010), while others have ranked the United States far behind in speed, availability, and price (Hansell, 2009). Sounds grim, but help is on the way. A range of technologies and firms are upgrading infrastructure and developing new systems that will increase capacity not just in the United States but also worldwide. Here’s an overview of some of the major technologies that can be used to speed the Internet’s last mile. Understanding Bandwidth When folks talk about bandwidth, they’re referring to data transmission speeds. Bandwidth is often expressed in bits per second, or bps. Prefix letters associated with multiples of bps are the same as the prefixes we mentioned in Chapter 5 “Moore’s Law: Fast, Cheap Computing and What It Means for the Manager” when discussing storage capacity in bytes: Kbps = thousand bits (or kilobits) per second, Mbps = million bits (or megabits) per second, Gbps = billion bits (or gigabits) per second, and Tbps = trillion bits (or terabits) per second. Remember, there are eight bits in a byte, and one byte is a single character. One megabyte is roughly equivalent to one digital book, forty-five seconds of music, or twenty seconds of medium-quality video (Farzad, 2010). But you can’t just multiply the number of bytes by eight to estimate how many bits you’ll need to transfer. When a file or other transmission is sliced into packets (usually of no more than about 1,500 bytes), there’s some overhead added.
Those packets “wrap” data chunks in an envelope surrounded by source and destination addressing and other important information. Here are some rough demand requirements for streaming media. For streaming audio like Pandora, you’d need at least 150 Kbps for acceptable regular quality, and at least 300 Kbps for high quality2. For streaming video (via Netflix), at a minimum you’d need 1.5 Mbps, but 3.0 Mbps will ensure decent video and audio. For what Netflix calls HD streaming, you’ll need a minimum of 5 Mbps, but would likely want 8 Mbps or more to ensure the highest quality video and audio3. • Cable Broadband Roughly 90 percent of U.S. homes are serviced by a cable provider, each capable of using a thick copper wire to offer broadband access. That wire (called a coaxial cable or coax) has shielding that reduces electrical interference, allowing cable signals to travel longer distances without degrading and with less chance of interference than conventional telephone equipment. One potential weakness of cable technology lies in the fact that most residential providers use a system that requires customers to share bandwidth with neighbors. If the guy next door is a BitTorrent-using bandwidth hog, your traffic could suffer (Thompson, 2010). Cable is fast and it’s getting faster. Many cable firms are rolling out a new technology called DOCSIS 3.0 that offers speeds up to and exceeding 50 Mbps (previous high-end speeds were about 16 Mbps and often much less than that). Cable firms are also creating so-called fiber-copper hybrids that run higher-speed fiber-optic lines into neighborhoods, then use lower-cost, but still relatively high-speed, copper infrastructure over short distances to homes (Hansell, 2009). Those are fast networks, but they are also very expensive to build, since cable firms are laying entirely new lines into neighborhoods instead of leveraging the infrastructure that they’ve already got in place. • DSL: Phone Company Copper Digital subscriber line (DSL) technology uses the copper wire the phone company has already run into most homes. Even as customers worldwide are dropping their landline phone numbers, the wires used to provide this infrastructure can still be used for broadband. DSL speeds vary depending on the technology deployed. Worldwide speeds may range from 7 Mbps to as much as 100 Mbps (albeit over very short distances) (Hansell, 2009). The Achilles heel of the technology lies in the fact that DSL uses standard copper telephone wiring. These lines lack the shielding used by cable, so signals begin to degrade the farther you are from the connecting equipment in telephone company offices. Speeds drop off significantly beyond about two miles from a central office or DSL hub. If you go four miles out, the technology becomes unusable. Some DSL providers are also using a hybrid fiber-copper system, but as with cable’s copper hybrids, this is expensive to build. The superspeedy DSL implementations that are popular in Europe and Asia work because foreign cities are densely populated and so many high-value customers can be accessed over short distances. In South Korea, for example, half the population lives in apartments, and most of those customers live in and around Seoul. This density also impacts costs—since so many people live in apartments, foreign carriers run fewer lines to reach customers, digging up less ground or stringing wires across fewer telephone poles. Their U.S. counterparts by contrast need to reach a customer base sprawled across the suburbs, so U.S.
firms have much higher infrastructure costs (Hansell, 2009). There’s another company with copper, electricity-carrying cables coming into your home—the electrical utility. BPL, or broadband over power line, technology has been available for years. However, there are few deployments because it is considered to be pricier and less practical than alternatives (King, 2009). • Fiber: A Light-Filled Glass Pipe to Your Doorstep Fiber to the home (FTTH) is the fastest last-mile technology around. It also works over long distances. Verizon’s FiOS technology boasts 50 Mbps download speeds but has tested network upgrades that increase speeds by over six times that (Higginbotham, 2009). The problem with fiber is that unlike cable or DSL copper, fiber-to-the-home networks weren’t already in place. That means firms had to build their own fiber networks from scratch. The cost of this build-out can be enormous. Verizon, for example, has spent over \$23 billion on its FTTH infrastructure. However, most experts think the upgrade was critical. Verizon has copper into millions of homes, but U.S. DSL is uncompetitive. Verizon’s residential landline business was dying as users switched to mobile phone numbers, and while mobile is growing, Verizon Wireless is a joint venture with the United Kingdom’s Vodafone, not a wholly owned firm. This means it shares wireless unit profits with its partner. With FiOS, Verizon now offers pay television, competing with cable’s core product. It also offers some of the fastest home broadband services anywhere, and it gets to keep everything it earns. In 2010, Google also announced plans to bring fiber to the home. Google deems its effort an experiment—it’s more interested in learning how developers and users take advantage of ultrahigh-speed fiber to the home (e.g., what kinds of apps are created and used, how usage and time spent online change), rather than becoming a nationwide ISP itself. Google says it will investigate ways to build and operate networks less expensively and plans to share findings with others. The Google network will be “open,” allowing other service providers to use Google’s infrastructure to resell services to consumers. The firm has pledged to bring speeds of 1 Gbps at competitive prices to at least 50,000 and potentially as many as 500,000 homes. Over 1,100 U.S. communities applied to be part of the Google experimental fiber network (Ingersoll & Kelly, 2010; Rao, 2010). • Wireless Mobile wireless service from cell phone access providers is delivered via cell towers. While these providers don’t need to build a residential wired infrastructure, they still need to secure space for cell towers, build the towers, connect the towers to a backbone network, and license the wireless spectrum (or airwave frequency space) for transmission. We need more bandwidth for mobile devices, too. AT&T now finds that the top 3 percent of its mobile network users gulp up 40 percent of the network’s capacity (thanks, iPhone users), and network strain will only increase as more people adopt smartphones. These users are streaming Major League Baseball games, exploring the planet with Google Earth, watching YouTube and Netflix, streaming music through Pandora, and more. Get a bunch of iPhone users in a crowded space, like in a college football stadium on game day, and the result is a network-choking data traffic jam. AT&T estimates that it’s not uncommon for 80 percent of game-day iPhone users to take out their phones and surf the Web for stats, snap and upload photos, and more.
But cell towers often can’t handle the load (Farzad, 2010). If you’ve ever lost coverage in a crowd, you’ve witnessed mobile network congestion firsthand. Trying to have enough capacity to avoid congestion traffic jams will cost some serious coin. In the midst of customer complaints, AT&T committed to spending \$18 billion on network upgrades to address its wireless capacity problem (Edwards & Kharif, 2010).
Table 12.1 Average Demand Usage by Function
Usage: Demand
Voice Calls: 4 MB/hr.
iPhone Browsing: 40–60 MB/hr.
Net Radio: 60 MB/hr.
YouTube: 200–400 MB/hr.
Conventional mobile phones use an estimated 100 MB/month, iPhones 560 MB/month, and iPads almost 1 GB/month. Source: R. Farzad, “The Truth about Bandwidth,” BusinessWeek, February 3, 2010.
We’re in the midst of transitioning from third generation (3G) to fourth generation (4G) wireless networks. 3G systems offer access speeds usually less than 2 Mbps (often a lot less) (German, 2010). While variants of 3G wireless might employ an alphabet soup of technologies—EV-DO (evolution data optimized), UMTS (universal mobile telecommunications system), and HSDPA (high-speed downlink packet access) among them—3G standards can be narrowed down to two camps: those based on the dominant worldwide standard called GSM (global system for mobile communications) and the runner-up standards based on CDMA (code division multiple access). Most of Europe and a good chunk of the rest of the world use GSM. In the United States, AT&T and T-Mobile use GSM-based 3G. Verizon Wireless and Sprint use the CDMA 3G standard. Typically, handsets designed for one network can’t be used on networks supporting the other standard. CDMA has an additional limitation in not being able to use voice and data at the same time. But 3G is being replaced by high-bandwidth 4G (fourth-generation) mobile networks. 4G technologies also fall into two standards camps: LTE (Long Term Evolution) and WiMAX (Worldwide Interoperability for Microwave Access). LTE looks like the global winner. In the United States, every major wireless firm, except for Sprint, is betting on LTE victory. Bandwidth for the service rivals what we’d consider fast cable a few years back. Average speeds range from 5 to 12 Mbps for downloads and 2 to 5 Mbps for upload, although Verizon tests in Boston and Seattle showed download speeds as high as 50 Mbps and upload speeds reaching 25 Mbps (German, 2010). Competing with LTE is WiMAX; don’t confuse it with Wi-Fi. As with other 3G and 4G technologies, WiMAX needs cell towers and operators need to have licensed spectrum from their respective governments (often paying multibillion-dollar fees to do so). Average download and upload speeds should start out at 3–6 Mbps and 1 Mbps, respectively, although this may go much higher (Lee, 2010). WiMAX looks like a particularly attractive option for cable firms, offering them an opportunity to get into the mobile phone business and offer a “quadruple play” of services: pay television, broadband Internet, home phone, and mobile. Comcast and Time Warner have both partnered with Clearwire (a firm majority-owned by Sprint) to gain access to WiMAX-based 4G mobile. 4G could also rewrite the landscape for home broadband competition. If speeds increase, it may be possible for PCs, laptops, and set-top boxes (STB) to connect to the Internet wirelessly via 4G, cutting into DSL, cable, and fiber markets.
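To put these last-mile numbers in perspective, the sketch below (Python, for illustration only) applies the bits-versus-bytes arithmetic from the “Understanding Bandwidth” sidebar to a hypothetical 4 GB high-definition movie download. The file size, the 10 percent packet-overhead allowance, and the assumption that you actually get the advertised rate are all illustrative simplifications—real-world throughput is usually lower.

```python
FILE_SIZE_GB = 4      # hypothetical HD movie download
OVERHEAD = 1.10       # rough allowance for packet headers and other overhead

# Advertised download speeds mentioned in this section, in Mbps
last_mile_speeds = {
    "DSL (typical)": 7,
    "4G LTE (average)": 12,
    "Cable DOCSIS 3.0": 50,
    "Fiber (FiOS)": 50,
    "Google fiber experiment": 1000,
}

megabits_to_move = FILE_SIZE_GB * 8 * 1000 * OVERHEAD   # gigabytes -> megabits, plus overhead

for technology, mbps in last_mile_speeds.items():
    minutes = megabits_to_move / mbps / 60
    print(f"{technology:25s} ~{minutes:6.1f} minutes")
```

Run the numbers and the same movie that takes well over an hour on a typical DSL line arrives in roughly a dozen minutes over a 50 Mbps cable or fiber connection, and in well under a minute at the 1 Gbps rate Google has pledged for its experimental fiber network—a concrete illustration of why the last mile, not the backbone, is usually the bottleneck.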
• Satellite Wireless Wireless systems provided by earth-bound base stations like cell phone towers are referred to as terrestrial wireless, but it is possible to provide telecommunications services via satellite. Early services struggled due to a number of problems. For example, the first residential satellite services were only used for downloads, which still needed a modem or some other connection to send any messages from the computer to the Internet. Many early systems also required large antennas and were quite expensive. Finally, some services were based on satellites in geosynchronous earth orbit (GEO). GEO satellites circle the earth in a fixed, or stationary, orbit above a given spot on the globe, but to do so they must be positioned at a distance that is roughly equivalent to the planet’s circumference. That means signals travel the equivalent of an around-the-world trip to reach the satellite and then the same distance to get to the user. The “last mile” became the last 44,000 miles at best. And if you used a service that also provided satellite upload as well as download, double that to about 88,000 miles. All that distance means higher latency (more delay) (Ou, 2008). A firm named O3b Networks thinks it might have solved the challenges that plagued early pioneers. O3b has an impressive list of big-name backers that include HSBC bank, cable magnate John Malone, European aerospace firm SES, and Google. The name O3b stands for the “Other 3 Billion,” of the world’s population who lack broadband Internet access, and the firm hopes to provide “fiber-quality” wireless service to more than 150 countries, specifically targeting underserved portions of the developing world. These “middle earth orbit” satellites will circle closer to the earth to reduce latency (only about 5,000 miles up, less than one-fourth the distance of GEO systems). To maintain the lower orbit, O3b’s satellites orbit faster than the planet spins, but with plans to launch as many as twenty satellites, the system will constantly blanket regions served. If one satellite circles to the other side of the globe, another one will circle around to take its place, ensuring there’s always an O3b “bird” overhead. Only about 3 percent of the sub-Saharan African population uses the Internet, compared to about 70 percent in the United States. But data rates in the few places served can cost as much as one hundred times the rates of comparable systems in the industrialized world (Lamb, 2008). O3b hopes to change that equation and significantly lower access rates. O3b customers will be local telecommunication firms, not end users. The plan is for local firms to buy O3b’s services wholesale and then resell it to customers alongside rivals who can do the same thing, collectively providing more consumer access, higher quality, and lower prices through competition. O3b is a big, bold, and admittedly risky plan, but if it works, its impact could be tremendous. • Wi-Fi and Other Hotspots Many users access the Internet via Wi-Fi (which stands for wireless fidelity). Computer and mobile devices have Wi-Fi antennas built into their chipsets, but to connect to the Internet, a device needs to be within range of a base station or hotspot. The base station range is usually around three hundred feet (you might get a longer range outdoors and with special equipment; and less range indoors when signals need to pass through solid objects like walls, ceilings, and floors). 
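Before picking the Wi-Fi story back up, it helps to make the satellite latency argument above concrete. The rough sketch below computes the minimum round-trip propagation delay implied by orbital altitude alone; the altitudes are the approximate figures cited above, and real latency would be higher once processing and routing overhead are added.

```python
# Minimum round-trip propagation delay for a satellite link. A request travels
# user -> satellite -> ground station and the reply comes back the same way,
# so the total path is roughly four times the orbital altitude.
SPEED_OF_LIGHT_MILES_PER_SEC = 186_282

def min_round_trip_ms(orbit_altitude_miles):
    return 4 * orbit_altitude_miles / SPEED_OF_LIGHT_MILES_PER_SEC * 1000

print(f"GEO (~22,000 miles up): {min_round_trip_ms(22_000):.0f} ms")
print(f"O3b MEO (~5,000 miles up): {min_round_trip_ms(5_000):.0f} ms")
# Roughly 470 ms versus about 107 ms before any network overhead is added,
# the difference between a painfully laggy connection and one that feels
# acceptable for interactive use.
```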
Wi-Fi base stations used in the home are usually bought by end users, then connected to a cable, DSL, or fiber provider. And now a sort of personal mobile phone hotspot is being used to overcome limitations in those services as well. Mobile providers can also be susceptible to poor coverage indoors. That’s because the spectrum used by most mobile phone firms doesn’t travel well through solid objects. Cell coverage is also often limited in the United States because of a lack of towers, which is a result of the NIMBY problem (not in my backyard). People don’t want an unsightly eighty-foot to four-hundred-foot tower clouding their local landscape, even if it will give their neighborhood better cell phone coverage (Dechter & Kharif, 2010). To overcome reception and availability problems, mobile telecom services firms have begun offering femtocells. These devices are usually smaller than a box of cereal and can sell for \$150 or less (some are free with specific service contracts). Plug a femtocell into a high-speed Internet connection like an in-home cable or fiber service and you can get “five-bar” coverage in a roughly 5,000-square-foot footprint (Mims, 2010). That can be a great solution for someone who has an in-home, high-speed Internet connection, but wants to get phone and mobile data service indoors, too.
• Net Neutrality: What’s Fair?
Across the world, battle lines are being drawn regarding the topic of Net neutrality. Net neutrality is the principle that all Internet traffic should be treated equally (Honan, 2008). Sometimes access providers have wanted to offer varying (some say “discriminatory”) coverage, depending on the service used and bandwidth consumed. But where regulation stands is currently in flux. In a pivotal U.S. case, the FCC ordered Comcast to stop throttling (blocking or slowing down) subscriber access to the peer-to-peer file sharing service BitTorrent. BitTorrent users can consume a huge amount of bandwidth—the service is often used to transfer large files, both legitimate (like versions of the Linux operating system) and pirated (HD movies). Then in spring 2010, a federal appeals court moved against the FCC’s position, unanimously ruling that the agency did not have the legal authority to dictate terms to Comcast4. On one side of the debate are Internet service firms, with Google being one of the strongest Net neutrality supporters. In an advocacy paper, Google states, “Just as telephone companies are not permitted to tell consumers who they can call or what they can say, broadband carriers should not be allowed to use their market power to control activity online5.” Many Internet firms also worry that if network providers move away from flat-rate pricing toward usage-based (or metered) schemes, this may limit innovation. Says Google’s Vint Cerf (who is considered one of the “fathers of the Internet” for his work on the original Internet protocol suite), “You are less likely to try things out. No one wants a surprise bill at the end of the month” (Jesdanun, 2009). Metered billing may limit the use of everything from iTunes to Netflix; after all, if you have to pay for per-bit bandwidth consumption as well as for the download service, then it’s as if you’re paying twice. The counterargument is that if firms are restricted from charging more for their investment in infrastructure and services, then they’ll have little incentive to continue to make the kinds of multibillion-dollar investments that innovations like 4G and fiber networks require.
Telecom industry executives have railed against Google, Microsoft, Yahoo! and others, calling them free riders who earn huge profits by piggybacking off ISP networks, all while funneling no profits back to the firms that provide the infrastructure. One Verizon vice president said, “The network builders are spending a fortune constructing and maintaining the networks that Google intends to ride on with nothing but cheap servers.…It is enjoying a free lunch that should, by any rational account, be the lunch of the facilities providers” (Mohammed, 2006). AT&T’s previous CEO has suggested that Google, Yahoo! and other services firms should pay for “preferred access” to the firm’s customers. The CEO of Spain’s Telefonica has also said the firm is considering charging Google and other Internet service firms for network use (Lunden, 2010). ISPs also lament the relentlessly increasing bandwidth demands placed on their networks. Back in 2007, YouTube streamed as much data in three months as the world’s radio, cable, and broadcast television channels combined stream in one year (Swanson, 2007), and YouTube has only continued to grow since then. Should ISPs be required to support the strain of this kind of bandwidth hog? And what if this one application clogs network use for other traffic, such as e-mail or Web surfing? Similarly, shouldn’t firms have the right to prioritize some services to better serve customers? Some network providers argue that services like video chat and streaming audio should get priority over, say, e-mail, which can afford a slight delay without major impact. In that case, there’s a pretty good argument that providers should be able to discriminate against services. But improving efficiency and throttling usage are two different things. Internet service firms say they create the demand that drives the broadband business, while broadband firms say Google and allies are ungrateful parasites that aren’t sharing the wealth. The battle lines on the Net neutrality frontier continue to be drawn, and the eventual outcome will impact consumers and investors, and will likely influence the continued expansion and innovation of the Internet.
• Summing Up
Hopefully, this chapter helped reveal the mysteries of the Internet. It’s interesting to know how “the cloud” works, but it can also be vital. As we’ve seen, executives in financial services firms consider mastery of the Internet infrastructure to be critically important to their competitive advantage. Media firms find the Internet both threatening and empowering. The advancement of last-mile technologies and issues of Net neutrality will expose threats and create opportunity. And a manager who knows how the Internet works will be in a better position to make decisions about how to keep the firm and its customers safe and secure, and be better prepared to brainstorm ideas for winning in a world where access is faster and cheaper, and firms, rivals, partners, and customers are more connected.
Key Takeaways
• The slowest part of the Internet is typically the last mile, not the backbone. While several technologies can offer broadband service over the last mile, the United States continues to rank below many other nations in terms of access speed, availability, and price.
• Cable firms and phone companies can leverage existing wiring for cable broadband and DSL service, respectively. Cable services are often criticized for shared bandwidth. DSL’s primary limitation is that it only works within a short distance of telephone office equipment.
• Fiber to the home can be very fast but very expensive to build.
• An explosion of high-bandwidth mobile applications is straining 3G networks. 4G systems may alleviate congestion by increasing capacities to near-cable speeds. Femtocells are another technology that can improve service by providing a personal mobile phone hotspot that can plug into in-home broadband access.
• The two major 3G standards (popularly referred to as GSM and CDMA) will be replaced by two unrelated 4G standards (LTE and WiMAX). GSM has been the dominant 3G technology worldwide. LTE looks like it will be the leading 4G technology.
• Satellite systems show promise in providing high-speed access to underserved parts of the world, but few satellite broadband providers have been successful so far.
• Net neutrality is the principle that all Internet traffic should be treated equally. Google and other firms say it is vital to maintain the openness of the Internet. Telecommunications firms say they should be able to limit access to services that overtax their networks, and some have suggested charging Google and other Internet firms for providing access to their customers.
Questions and Exercises
1. Research online for the latest country rankings for broadband service. Where does the United States currently rank? Why?
2. Which broadband providers can service your home? Which would you choose? Why?
3. Research the status of Google’s experimental fiber network. Report updated findings to your class. Why do you suppose Google would run this “experiment”? What other Internet access experiments has the firm been involved in?
4. Show your understanding of the economics and competitive forces of the telecom industry. Discuss why Verizon chose to go with fiber. Do you think this was a wise decision or not? Why? Feel free to do additional research to back up your argument.
5. Why have other nations enjoyed faster broadband speeds, greater availability, and lower prices?
6. The iPhone has been called both a blessing and a curse for AT&T. Why do you suppose this is so?
7. Investigate the status of mobile wireless offerings (3G and 4G). Which firm would you choose? Why? Which factors are most important in your decision?
8. Name the two dominant 3G standards. What are the differences between the two? Which firms in your nation support each standard?
9. Name the two dominant 4G standards. Which firms in your nation will support the respective standards?
10. Have you ever lost communication access—wirelessly or via wired connection? What caused the loss or outage?
11. What factors shape the profitability of the mobile wireless provider industry? How do these economics compare with the cable and wire line industry? Who are the major players and which would you invest in? Why?
12. Last-mile providers often advertise very fast speeds, but users rarely see speeds as high as advertised rates. Search online to find a network speed test and try it from your home, office, mobile device, or dorm. How fast is the network? If you’re able to test from home, what bandwidth rates does your ISP advertise? Does this differ from what you experienced? What could account for this discrepancy?
13. How can 4G technology help cable firms? Why might it hurt them?
14. What’s the difference between LEO satellite systems and the type of system used by O3b? What are the pros and cons of these efforts? Conduct some additional research. What is the status of O3b and other satellite broadband efforts?
15. What advantages could broadband offer to underserved areas of the world? Is Internet access important for economic development? Why or why not?
16. Does your carrier offer a femtocell? Would you use one? Why or why not?
17. Be prepared to debate the issue of Net neutrality in class. Prepare positions both supporting and opposing Net neutrality. Which do you support and why?
18. Investigate the status of Net neutrality laws in your nation and report your findings to your instructor. Do you agree with the stance currently taken by your government? Why or why not?
Learning Objectives After studying this section you should be able to do the following: 1. Recognize that information security breaches are on the rise. 2. Understand the potentially damaging impact of security breaches. 3. Recognize that information security must be made a top organizational priority. Sitting in the parking lot of a Minneapolis Marshalls, a hacker armed with a laptop and a telescope-shaped antenna infiltrated the store’s network via an insecure Wi-Fi base station1. The attack launched what would become a billion-dollar-plus nightmare scenario for TJX, the parent of retail chains that include Marshalls, Home Goods, and T. J. Maxx. Over a period of several months, the hacker and his gang stole at least 45.7 million credit and debit card numbers and pilfered driver’s licenses and other private information from an additional 450,000 customers (King, 2009). TJX, at the time a \$17.5 billion Fortune 500 firm, was left reeling from the incident. The attack deeply damaged the firm’s reputation. It burdened customers and banking partners with the time and cost of reissuing credit cards. And TJX suffered under settlement costs, payouts from court-imposed restitution, legal fees, and more. The firm estimated that it spent more than \$150 million to correct security problems and settle with consumers affected by the breach, and that was just the tip of the iceberg. Estimates peg TJX’s overall losses from this incident at between \$1.35 billion and \$4.5 billion (Matwyshyn, 2009). A number of factors led to and amplified the severity of the TJX breach. There was a personnel betrayal: the mastermind was an alleged FBI informant who previously helped bring down a massive credit card theft scheme but then double-crossed the Feds and used insider information to help his gang outsmart the law and carry out subsequent hacks (Goldman, 2009). There was a technology lapse: TJX made itself an easy mark by using WEP, a wireless security technology less secure than the stuff many consumers use in their homes—one known for years to be trivially compromised by the kind of “drive-by” hacking initiated by the perpetrators. And there was a procedural gaffe: retailers were in the process of rolling out a security rubric known as the Payment Card Industry Data Security Standard. Despite an industry deadline, however, TJX had requested and received an extension, delaying the rollout of mechanisms that might have discovered and plugged the hole before the hackers got in (Anthes, 2008). The massive impact of the TJX breach should make it clear that security must be a top organizational priority. Attacks are on the rise. In 2008, more electronic records were breached than in the previous four years combined (King, 2009). While the examples and scenarios presented here are shocking, the good news is that the vast majority of security breaches can be prevented. Let’s be clear from the start: no text can provide an approach that will guarantee that you’ll be 100 percent secure. And that’s not the goal of this chapter. The issues raised in this brief introduction can, however, help make you aware of vulnerabilities; improve your critical thinking regarding current and future security issues; and help you consider whether a firm has technologies, training, policies, and procedures in place to assess risks, lessen the likelihood of damage, and respond in the event of a breach. A constant vigilance regarding security needs to be part of your individual skill set and a key component in your organization’s culture. 
An awareness of the threats and approaches discussed in this chapter should help reduce your chance of becoming a victim. As we examine security issues, we’ll first need to understand what’s happening, who’s doing it, and what their motivation is. We’ll then examine how these breaches are happening with a focus on technologies and procedures. Finally, we’ll sum up with what can be done to minimize the risks of being victimized and quell potential damage of a breach for both the individual and the organization. Key Takeaways • Information security is everyone’s business and needs to be made a top organizational priority. • Firms suffering a security breach can experience direct financial loss, exposed proprietary information, fines, legal payouts, court costs, damaged reputations, plummeting stock prices, and more. • Information security isn’t just a technology problem; a host of personnel and procedural factors can create and amplify a firm’s vulnerability. Questions and Exercises 1. As individuals or in groups assigned by your instructor, search online for recent reports on information security breaches. Come to class prepared to discuss the breach, its potential impact, and how it might have been avoided. What should the key takeaways be for managers studying your example? 2. Think of firms that you’ve done business with online. Search to see if these firms have experienced security breaches in the past. What have you found out? Does this change your attitude about dealing with the firm? Why or why not? 3. What factors were responsible for the TJX breach? Who was responsible for the breach? How do you think the firm should have responded? 1Particular thanks goes to my Boston College colleague, Professor Sam Ransbotham, whose advice, guidance, and suggestions were invaluable in creating this chapter. Any errors or omissions are entirely my own.
Learning Objectives After studying this section you should be able to do the following: 1. Understand the source and motivation of those initiating information security attacks. 2. Relate examples of various infiltrations in a way that helps raise organizational awareness of threats. Thieves, vandals, and other bad guys have always existed, but the environment has changed. Today, nearly every organization is online, making any Internet-connected network a potential entry point for the growing worldwide community of computer criminals. Software and hardware solutions are also more complex than ever. Different vendors, each with their own potential weaknesses, provide technology components that may be compromised by misuse, misconfiguration, or mismanagement. Corporations have become data packrats, hoarding information in hopes of turning bits into bucks by licensing databases, targeting advertisements, or cross-selling products. And flatter organizations also mean that lower-level employees may be able to use technology to reach deep into corporate assets—amplifying threats from operator error, a renegade employee, or one compromised by external forces. There are a lot of bad guys out there, and motivations vary widely, including the following:
• Account theft and illegal funds transfer
• Stealing personal or financial data
• Compromising computing assets for use in other crimes
• Extortion
• Espionage
• Cyberwarfare
• Terrorism
• Pranksters
• Protest hacking (hacktivism)
• Revenge (disgruntled employees)
Criminals have stolen more than \$100 million from U.S. banks in the first three quarters of 2009, and they did it “without drawing a gun or passing a note to a teller” (Kroft, 2009). While some steal cash for their own use, others resell their hacking take to others. There is a thriving cybercrime underworld market in which data harvesters sell to cash-out fraudsters: criminals who might purchase data from the harvesters in order to buy (then resell) goods using stolen credit cards or create false accounts via identity theft. These collection and resale operations are efficient and sophisticated. Law enforcement has taken down sites like DarkMarket and ShadowCrew, in which card thieves and hacking tool peddlers received eBay-style seller ratings vouching for the “quality” of their wares (Singel, 2008). Hackers might also infiltrate computer systems to enlist hardware for subsequent illegal acts. A cybercrook might deliberately hop through several systems to make his path difficult to follow, slowing cross-border legal pursuit or even thwarting prosecution if launched from nations without extradition agreements. In fact, your computer may be up for rent by cyber thieves right now. Botnets of zombie computers (networks of infiltrated and compromised machines controlled by a central command) are used for all sorts of nefarious activity. This includes sending spam from thousands of difficult-to-shut-down accounts, launching tough-to-track click fraud efforts, or staging what’s known as distributed denial of service (DDoS) attacks (effectively shutting down Web sites by overwhelming them with a crushing load of seemingly legitimate requests sent simultaneously by thousands of machines). Botnets have been discovered that are capable of sending out 100 billion spam messages a day (Higgins, 2008), and botnets as large as 10 million zombies have been identified. Such systems theoretically control more computing power than the world’s fastest supercomputers (Krebs, 2007).
Extortionists might leverage botnets or hacked data to demand payment to avoid retribution. Three Eastern European gangsters used a botnet and threatened DDoS to extort \$4 million from UK sports bookmakers1, while an extortion plot against the state of Virginia threatened to reveal names, Social Security numbers, and prescription information stolen from a medical records database (Kroft, 2009). Competition has also lowered the price to inflict such pain. BusinessWeek reports that the cost of renting out ten thousand machines, enough to cripple a site like Twitter, has tumbled to just \$200 a day (Schectman, 2009). Corporate espionage might be performed by insiders, rivals, or even foreign governments. Gary Min, a scientist working for DuPont, was busted when he tried to sell information valued at some \$400 million, including R&D documents and secret data on proprietary products (Vijayan, 2007). Spies also breached the \$300 billion U.S. Joint Strike Fighter project, siphoning off terabytes of data on navigation and other electronics systems (Gorman et al., 2009). Cyberwarfare has become a legitimate threat, with several attacks demonstrating how devastating technology disruptions by terrorists or a foreign power might be. Brazil has seen hacks that cut off power to millions. The 60 Minutes news program showed a demonstration by “white hat” hackers that could compromise a key component in an oil refinery, force it to overheat, and cause an explosion. Taking out key components of the vulnerable U.S. power grid may be particularly devastating, as the equipment is expensive, much of it is no longer made in the United States, and some components may take three to four months to replace (Kroft, 2009). “Hacker”: Good or Bad? The terms hacker and hack are widely used, but their meaning is often based on context. When referring to security issues, the media widely refers to hackers as bad guys who try to break into (hack) computer systems. Some geezer geeks object to this use, as the term hack in computer circles originally referred to a clever (often technical) solution and the term hacker referred to a particularly skilled programmer. Expect to see the terms used both positively and negatively. You might also encounter the terms white hat hackers and black hat hackers. The white hats are the good guys who probe for weaknesses, but don’t exploit them. Instead, they share their knowledge in hopes that the holes they’ve found will be plugged and security will be improved. Many firms hire consultants to conduct “white hat” hacking expeditions on their own assets as part of their auditing and security process. “Black hats” are the bad guys. Some call them “crackers.” There’s even a well-known series of hacker conventions known as the Black Hat conference. Other threats come from malicious pranksters, like the group that posted seizure-inducing images on Web sites frequented by epilepsy sufferers (Schwartz, 2008). Others are hacktivists, targeting firms, Web sites, or even users as a protest measure. In 2009, Twitter was brought down and Facebook and LiveJournal were hobbled as Russian-sympathizing hacktivists targeted the social networking and blog accounts of the Georgian blogger known as Cyxymu. The silencing of millions of accounts was simply collateral damage in a massive DDoS attack meant to mute this single critic of the Russian government (Schectman, 2009). And as power and responsibility are concentrated in the hands of a few, revenge-seeking employees can do great damage.
The San Francisco city government lost control of a large portion of its own computer network over a ten-day period when a single disgruntled employee refused to divulge critical passwords (Vijayan, 2010). The bad guys are legion and the good guys often seem outmatched and underresourced. Law enforcement agencies dealing with computer crime are increasingly outnumbered, outskilled, and underfunded. Many agencies are staffed with technically weak personnel who were trained in a prior era’s crime fighting techniques. Governments can rarely match the pay scale and stock bonuses offered by private industry. Organized crime networks now have their own R&D labs and are engaged in sophisticated development efforts to piece together methods to thwart current security measures.
Key Takeaways
• Computer security threats have moved beyond the curious teen with a PC and are now sourced from a number of motivations, including theft, leveraging compromised computing assets, extortion, espionage, warfare, terrorism, pranks, protest, and revenge.
• Threats can come from within the firm as well as from the outside.
• Cybercriminals operate in an increasingly sophisticated ecosystem where data harvesters and tool peddlers leverage sophisticated online markets to sell to cash-out fraudsters and other crooks.
• Technical and legal complexity make pursuit and prosecution difficult.
• Many law enforcement agencies are underfunded, underresourced, and underskilled to deal with the growing hacker threat.
Questions and Exercises
1. What is a botnet? What sorts of exploits would use a botnet? Why would a botnet be useful to cybercriminals?
2. Why are threats to the power grid potentially so concerning? What are the implications of power-grid failure and of property damage? Who might execute these kinds of attacks? What are the implications for firms and governments planning for the possibility of cyberwarfare and cyberterror?
3. Scan the trade press for examples of hacking that apply to the various motivations mentioned in this chapter. What happened to the hacker? Were they caught? What penalties do they face?
4. Why do cybercriminals execute attacks across national borders? What are the implications for pursuit, prosecution, and law enforcement?
5. Why do law enforcement agencies struggle to cope with computer crime?
6. A single rogue employee effectively held the city of San Francisco’s network hostage for ten days. What processes or controls might the city have created that could have prevented this kind of situation from taking place?
1Trend Micro, “Web Threats Whitepaper,” March 2008.
13.03: Where Are Vulnerabilities? Understanding the Weaknesses
Learning Objectives After studying this section you should be able to do the following: 1. Recognize the potential entry points for security compromise. 2. Understand infiltration techniques such as social engineering, phishing, malware, Web site compromises (such as SQL injection), and more. 3. Identify various methods and techniques to thwart infiltration.
Figure 13.1 This diagram shows only some of the potential weaknesses that can compromise the security of an organization’s information systems. Every physical or network “touch point” is a potential vulnerability. Understanding where weaknesses may exist is a vital step toward improved security.
Modern information systems have lots of interrelated components and if one of these components fails, there might be a way in to the goodies.
This creates a large attack surface for potential infiltration and compromise, as well as one that is simply vulnerable to unintentional damage and disruption.
Learning Objectives After studying this section you should be able to do the following: 1. Identify critical steps to improve your individual and organizational information security. 2. Be a tips, tricks, and techniques advocate, helping make your friends, family, colleagues, and organization more secure. 3. Recognize the major information security issues that organizations face, as well as the resources, methods, and approaches that can help make firms more secure. • Taking Action as a User The weakest link in security is often a careless user, so don’t make yourself an easy mark. Once you get a sense of threats, you understand the kinds of precautions you need to take. Security considerations then become more common sense than high tech. Here’s a brief list of major issues to consider: • Surf smart. Think before you click—question links, enclosures, download request, and the integrity of Web sites that you visit. Avoid suspicious e-mail attachments and Internet downloads. Be on guard for phishing, and other attempts to con you into letting in malware. Verify anything that looks suspicious before acting. Avoid using public machines (libraries, coffee shops) when accessing sites that contain your financial data or other confidential information. • Stay vigilant. Social engineering con artists and rogue insiders are out there. An appropriate level of questioning applies not only to computer use, but also to personal interactions, be it in person, on the phone, or electronically. • Stay updated. Turn on software update features for your operating system and any application you use (browsers, applications, plug-ins, and applets), and manually check for updates when needed. Malware toolkits specifically scan for older, vulnerable systems, so working with updated programs that address prior concerns lowers your vulnerable attack surface. • Stay armed. Install a full suite of security software. Many vendors offer a combination of products that provide antivirus software that blocks infection, personal firewalls that repel unwanted intrusion, malware scanners that seek out bad code that might already be nesting on your PC, antiphishing software that identifies if you’re visiting questionable Web sites, and more. Such tools are increasingly being built into operating systems, browsers, and are deployed at the ISP or service provider (e-mail firm, social network) level. But every consumer should make it a priority to understand the state of the art for personal protection. In the way that you regularly balance your investment portfolio to account for economic shifts, or take your car in for an oil change to keep it in top running condition, make it a priority to periodically scan the major trade press or end-user computing sites for reviews and commentary on the latest tools and techniques for protecting yourself (and your firm). • Be settings smart. Don’t turn on risky settings like unrestricted folder sharing that may act as an invitation for hackers to drop off malware payloads. Secure home networks with password protection and a firewall. Encrypt hard drives—especially on laptops or other devices that might be lost or stolen. Register mobile devices for location identification or remote wiping. Don’t click the “Remember me” or “Save password” settings on public machines, or any device that might be shared or accessed by others. 
Similarly, if your machine might be used by others, turn off browser settings that auto-fill fields with prior entries—otherwise you make it easy for someone to use that machine to track your entries and impersonate you. And when using public hotspots, be sure to turn on your VPN software to encrypt transmission and hide from network eavesdroppers.
• Be password savvy. Change the default password on any new products that you install. Update your passwords regularly. Using guidelines outlined earlier, choose passwords that are tough to guess, but easy for you (and only you) to remember. Federate your passwords so that you’re not using the same access codes for your most secure sites. Never save passwords in nonsecured files or e-mail, and never write them down in easily accessed locations.
• Be disposal smart. Shred personal documents. Wipe hard drives with an industrial strength software tool before recycling, donating, or throwing away—remember in many cases “deleted” files can still be recovered. Destroy media such as CDs and DVDs that may contain sensitive information. Erase USB drives when they are no longer needed.
• Back up. The most likely threat to your data doesn’t come from hackers; it comes from hardware failure (Taylor, 2009). Yet most users still don’t regularly back up their systems. This is another do-it-now priority. Cheap, plug-in hard drives work with most modern operating systems to provide continual backups, allowing for quick rollback to earlier versions if you’ve accidentally ruined some vital work. And services like EMC’s Mozy provide monthly, unlimited backup over the Internet for less than what you probably spent on your last lunch (a fire, theft, or similar event could also result in the loss of any backups stored on-site, but Internet backup services can provide off-site storage and access if disaster strikes).
• Check with your administrator. All organizations that help you connect to the Internet—your ISP, firm, or school—should have security pages. Many provide free security software tools. Use them as resources. Remember—it’s in their interest to keep you safe, too!
• Frameworks, Standards, and Compliance
Developing organizational security is a daunting task. You’re in an arms race with adversaries that are tenacious and constantly on the lookout for new exploits. Fortunately, no firm is starting from scratch—others have gone before you and many have worked together to create published best practices. There are several frameworks, but perhaps the best known of these efforts comes from the International Organization for Standardization (ISO), and is broadly referred to as ISO27k or the ISO 27000 series. According to ISO.org, this evolving set of standards provides “a model for establishing, implementing, operating, monitoring, reviewing, maintaining, and improving an Information Security Management System.” Firms may also face compliance requirements—legal or professionally binding steps that must be taken. Failure to do so could result in fines, sanctions, and other punitive measures. At the federal level, examples include HIPAA (the Health Insurance Portability and Accountability Act), which regulates health data; the Gramm-Leach-Bliley Act, which regulates financial data; and the Children’s Online Privacy Protection Act, which regulates data collection on minors. U.S. government agencies must also comply with FISMA (the Federal Information Security Management Act), and there are several initiatives at other levels of government.
By 2009, some level of state data breach laws had been passed by over thirty states, while multinationals face a growing number of statutes throughout the world. Your legal team and trade associations can help you understand your domestic and international obligations. Fortunately, there are often frameworks and guidelines to assist in compliance. For example, the ISO standards include subsets targeted at the telecommunications and health care industries, and major credit card firms have created the PCI (payment card industry) standards. And there are skilled consulting professionals who can help bring firms up to speed in these areas, and help expand their organizational radar as new issues develop. Here is a word of warning on frameworks and standards: compliance does not equal security. Outsourcing portions of security efforts without a complete organizational commitment to being secure can also be dangerous. Some organizations simply approach compliance as a necessary evil: a sort of checklist that can reduce the likelihood of a lawsuit or other punitive measure (Davis, 2009). While you want to make sure you’re doing everything in your power not to get sued, this isn’t the goal. The goal is taking all appropriate measures to ensure that your firm is secure for your customers, employees, shareholders, and others. Frameworks help shape your thinking and expose things you should do, but security doesn’t stop there—this is a constant, evolving process that needs to pervade the organization from the CEO suite and board, down to front line workers and potentially out to customers and partners. And be aware of the security issues associated with any mergers and acquisitions. Bringing in new firms, employees, technologies, and procedures means reassessing the security environment for all players involved.
The Heartland Breach
On inauguration day 2009, credit card processor Heartland announced that it had experienced what was one of the largest security breaches in history. The Princeton, New Jersey-based firm was, at the time, the nation’s fifth largest payments processor. Its business was responsible for handling the transfer of funds and information between retailers and cardholders’ financial institutions. That means infiltrating Heartland was like breaking into Fort Knox. It’s been estimated that as many as 100 million cards issued by more than 650 financial services companies may have been compromised during the Heartland breach. Said the firm’s CEO, this was “the worst thing that can happen to a payments company and it happened to us” (King, 2009). Wall Street noticed. The firm’s stock tanked—within a month, its market capitalization had plummeted over 75 percent, dropping over half a billion dollars in value (Claburn, 2009). The Heartland case provides a cautionary tale against thinking that security ends with compliance. Heartland had in fact passed multiple audits, including one conducted the month before the infiltration began. Still, at least thirteen pieces of malware were uncovered on the firm’s servers. Compliance does not equal security. Heartland was compliant, but a firm can be compliant and not be secure. Compliance is not the goal; security is. Since the breach, the firm’s executives have championed industry efforts to expand security practices, including encrypting card information at the point it is swiped and keeping it secure through settlement.
Such “cradle-to-grave” encryption can help create an environment where even compromised networking equipment or intercepting relay systems wouldn’t be able to grab codes (Claburn, 2009; King, 2009). Recognize that security is a continual process, it is never done, and firms need to pursue security with tenacity and commitment. • Education, Audit, and Enforcement Security is as much about people, process, and policy, as it is about technology. From a people perspective, the security function requires multiple levels of expertise. Operations employees are involved in the day-to-day monitoring of existing systems. A group’s R&D function is involved in understanding emerging threats and reviewing, selecting, and implementing updated security techniques. A team must also work on broader governance issues. These efforts should include representatives from specialized security and broader technology and infrastructure functions. It should also include representatives from general counsel, audit, public relations, and human resources. What this means is that even if you’re a nontechnical staffer, you may be brought in to help a firm deal with security issues. Processes and policies will include education and awareness—this is also everyone’s business. As the Vice President of Product Development at security firm Symantec puts it, “We do products really well, but the next step is education. We can’t keep the Internet safe with antivirus software alone” (Goldman, 2009). Companies should approach information security as a part of their “collective corporate responsibility…regardless of whether regulation requires them to do so1.” For a lesson in how important education is, look no further than the head of the CIA. Former U.S. Director of Intelligence John Deutch engaged in shockingly loose behavior with digital secrets, including keeping a daily journal of classified information—some 1,000+ pages—on memory cards he’d transport in his shirt pocket. He also downloaded and stored Pentagon information, including details of covert operations, at home on computers that his family used for routine Internet access (Lewis, 2000). Employees need to know a firm’s policies, be regularly trained, and understand that they will face strict penalties if they fail to meet their obligations. Policies without eyes (audit) and teeth (enforcement) won’t be taken seriously. Audits include real-time monitoring of usage (e.g., who’s accessing what, from where, how, and why; sound the alarm if an anomaly is detected), announced audits, and surprise spot checks. This function might also stage white hat demonstration attacks—attempts to hunt for and expose weaknesses, hopefully before hackers find them. Frameworks offer guidelines on auditing, but a recent survey found most organizations don’t document enforcement procedures in their information security policies, that more than one-third do not audit or monitor user compliance with security policies, and that only 48 percent annually measure and review the effectiveness of security policies (Matwyshyn, 2009). A firm’s technology development and deployment processes must also integrate with the security team to ensure that from the start, applications, databases, and other systems are implemented with security in mind. The team will have specialized skills and monitor the latest threats and are able to advise on precautions necessary to be sure systems aren’t compromised during installation, development, testing, and deployment. 
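To make the real-time usage monitoring mentioned above a little more tangible, here is a deliberately tiny sketch that flags account activity from an IP address a user has never used before. The log records and field names are invented for illustration; production audit systems work from far richer logs and far smarter baselines.

```python
# Toy audit-log check: flag account activity coming from an IP address the
# user has never been seen on before. All records below are fabricated.
from collections import defaultdict

access_log = [
    {"user": "jsmith", "ip": "10.1.4.22",   "action": "read_report"},
    {"user": "jsmith", "ip": "10.1.4.22",   "action": "read_report"},
    {"user": "jsmith", "ip": "203.0.113.9", "action": "export_customer_db"},
    {"user": "akim",   "ip": "10.1.7.90",   "action": "read_report"},
]

known_ips = defaultdict(set)
for entry in access_log:
    user, ip = entry["user"], entry["ip"]
    if known_ips[user] and ip not in known_ips[user]:
        # A real system would raise an alert, hold the account, or page a
        # response team rather than just printing a line.
        print(f"ALERT: {user} ran '{entry['action']}' from unfamiliar IP {ip}")
    known_ips[user].add(ip)
```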
• What Needs to Be Protected and How Much Is Enough?
A worldwide study by PricewaterhouseCoopers and Chief Security Officer magazine revealed that most firms don’t even know what they need to protect. Only 33 percent of executives responded that their organizations kept accurate inventory of the locations and jurisdictions where data was stored, and only 24 percent kept inventory of all third parties using their customer data (Matwyshyn, 2009). What this means is that most firms don’t even have an accurate read on where their valuables are kept, let alone how to protect them. So information security should start with an inventory-style audit and risk assessment. Technologies map back to specific business risks. What do we need to protect? What are we afraid might happen? And how do we protect it? Security is an economic problem, involving attack likelihood, costs, and prevention benefits. These are complex trade-offs that must consider losses from theft of resources, systems damage, data loss, disclosure of proprietary information, recovery, downtime, stock price declines, legal fees, government and compliance penalties, and intangibles such as damaged firm reputation, loss of customer and partner confidence, industry damage, promotion of the adversary, and encouragement of future attacks. While many firms skimp on security, firms also don’t want to misspend, targeting exploits that aren’t likely while underinvesting in easily prevented methods to thwart common infiltration techniques. Hacker conventions like DefCon can show some really wild exploits. But it’s up to the firm to assess how vulnerable it is to these various risks. The local donut shop has far different needs than a military installation, law enforcement agency, financial institution, or firm housing other high-value electronic assets. A skilled risk assessment team will consider these vulnerabilities and what sort of countermeasure investments should take place. Economic decisions usually drive hacker behavior, too. While in some cases attacks are based on vendetta or personal reasons, in most cases exploit economics largely boils down to Adversary ROI = Asset value to adversary – Adversary cost. An adversary’s costs include not only the resources, knowledge, and technology required for the exploit, but also the risk of getting caught. For a rough illustration, if a stolen database would fetch an adversary less on the black market than the time, tools, and legal exposure needed to grab it, the attack probably isn’t worth mounting. Making things tough to get at and lobbying for legislation that imposes severe penalties on crooks can help raise adversary costs and lower your likelihood of becoming a victim.
• Technology’s Role
Technical solutions often involve industrial-strength variants of the previously discussed issues individuals can employ, so your awareness is already high. Additionally, an organization’s approach will often leverage multiple layers of protection and incorporate a wide variety of protective measures. Patch. Firms must be especially vigilant to pay attention to security bulletins and install software updates that plug existing holes (often referred to as patches). Firms that don’t plug known problems will be vulnerable to trivial and automated attacks. Unfortunately, many firms aren’t updating all components of their systems with consistent attention. With operating systems automating security update installations, hackers have moved on to application targets. But a major study recently found that organizations took at least twice as long to patch application vulnerabilities as they took to patch operating system holes (Wildstrom, 2009).
And remember, software isn’t limited to conventional PCs and servers. Embedded systems abound, and connected, yet unpatched devices are vulnerable. Malware has infected everything from unprotected ATM machines (Lilly, 2009) to restaurant point-of-sale systems (McMillan, 2009) to fighter plane navigation systems (Matyszczyk, 2009). As an example of unpatched vulnerabilities, consider the DNS cache poisoning exploit described earlier in this chapter. The discovery of this weakness was one of the biggest security stories the year it was discovered, and security experts saw this as a major threat. Teams of programmers worldwide raced to provide fixes for the most widely used versions of DNS software. Yet several months after patches were available, roughly one quarter of all DNS servers were still unpatched and exposed2. To be fair, not all firms delay patches out of negligence. Some organizations have legitimate concerns about testing whether the patch will break their system or whether the new technology contains a change that will cause problems down the road3. And there have been cases where patches themselves have caused problems. Finally, many software updates require that systems be taken down. Firms may have uptime requirements that make immediate patching difficult. But ultimately, unpatched systems are an open door for infiltration. Lock down hardware. Firms range widely in the security regimes used to govern purchase through disposal system use. While some large firms such as Kraft are allowing employees to select their own hardware (Mac or PC, desktop or notebook, iPhone or BlackBerry) (Wingfield, 2009), others issue standard systems that prevent all unapproved software installation and force file saving to hardened, backed-up, scanned, and monitored servers. Firms in especially sensitive industries such as financial services may regularly reimage the hard drive of end-user PCs, completely replacing all the bits on a user’s hard drive with a pristine, current version—effectively wiping out malware that might have previously sneaked onto a user’s PC. Other lock-down methods might disable the boot capability of removable media (a common method for spreading viruses via inserted discs or USBs), prevent Wi-Fi use or require VPN encryption before allowing any network transmissions, and more. The cloud helps here, too. (See Chapter 10 “Software in Flux: Partly Cloudy and Sometimes Free”.) Employers can also require workers to run all of their corporate applications inside a remote desktop where the actual executing hardware and software is elsewhere (likely hosted as a virtual machine session on the organization’s servers), and the user is simply served an image of what is executing remotely. This seals the virtual PC off in a way that can be thoroughly monitored, updated, backed up, and locked down by the firm. In the case of Kraft, executives worried that the firm’s previously restrictive technology policies prevented employees from staying in step with trends. Employees opting into the system must sign an agreement promising they’ll follow mandated security procedures. Still, financial services firms, law offices, health care providers, and others may need to maintain stricter control, for legal and industry compliance reasons. Lock down the network. Network monitoring is a critical part of security, and a host of technical tools can help. 
Firms employ firewalls to examine traffic as it enters and leaves the network, potentially blocking certain types of access, while permitting approved communication. Intrusion detection systems specifically look for unauthorized behavior, sounding the alarm and potentially taking action if something seems amiss. Some firms deploy honeypots—bogus offerings meant to distract attackers. If attackers take honeypot bait, firms may gain an opportunity to recognize the hacker’s exploits, identify the IP address of intrusion, and take action to block further attacks and alert authorities. Many firms also deploy blacklists—denying the entry or exit of specific IP addresses, products, Internet domains, and other communication restrictions. While blacklists block known bad guys, whitelists are even more restrictive—permitting communication only with approved entities or in an approved manner. These technologies can be applied to network technology, specific applications, screening for certain kinds of apps, malware signatures, and hunting for anomalous patterns. The latter is important, as recent malware has become polymorphic, meaning different versions are created and deployed in a way that their signature, a sort of electronic fingerprint often used to recognize malicious code, is slightly altered. This also helps with zero-day exploits, and in situations where whitelisted Web sites themselves become compromised. Many technical solutions, ranging from network monitoring and response to e-mail screening, are migrating to “the cloud.” This can be a good thing—if network monitoring software immediately shares news of a certain type of attack, defenses might be pushed out to all clients of a firm (the more users, the “smarter” the system can potentially become—again we see the power of network effects in action). Lock down partners. Insist partner firms are compliant, and audit them to ensure this is the case. This includes technology providers and contract firms, as well as value chain participants such as suppliers and distributors. Anyone who touches your network is a potential point of weakness. Many firms will build security expectations and commitments into performance guarantees known as service level agreements (SLAs). Lock down systems. Audit for SQL injection and other application exploits. The security team must constantly scan exploits and then probe its systems to see if it’s susceptible, advising and enforcing action if problems are uncovered. This kind of auditing should occur with all of a firm’s partners. Access controls can also compartmentalize data access on a need-to-know basis. Such tools can not only enforce access privileges, they can help create and monitor audit trails to help verify that systems are not being accessed by the unauthorized, or in suspicious ways. Audit trails are used for deterring, identifying, and investigating these cases. Recording, monitoring, and auditing access allows firms to hunt for patterns of abuse. Logs can detail who, when, and from where assets are accessed. Giveaways of nefarious activity may include access from unfamiliar IP addresses, from nonstandard times, accesses that occur at higher than usual volumes, and so on. Automated alerts can put an account on hold or call in a response team for further observation of the anomaly. 
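Circling back to the SQL injection audits mentioned above, the sketch below shows the pattern auditors look for and the fix they recommend. It uses Python’s built-in sqlite3 module purely as a stand-in for whatever database a firm actually runs, and the table and payload string are made up for the demonstration.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
conn.execute("INSERT INTO users VALUES ('alice', 0)")

user_input = "alice' OR '1'='1"  # a classic injection payload

# Vulnerable pattern: user input is concatenated straight into the SQL string,
# so the payload rewrites the query's logic and returns every row.
vulnerable = conn.execute(
    "SELECT * FROM users WHERE name = '" + user_input + "'").fetchall()

# Safer pattern: a parameterized query treats the input strictly as data.
parameterized = conn.execute(
    "SELECT * FROM users WHERE name = ?", (user_input,)).fetchall()

print("Concatenated query returned:", vulnerable)      # leaks the whole table
print("Parameterized query returned:", parameterized)  # returns nothing
```

Most modern database libraries and frameworks make the parameterized style the default; application audits and automated scanners hunt for the concatenated one.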
Single-sign-on tools can help firms offer employees one very strong password that works across applications, is changed frequently (or managed via hardware cards or mobile phone log-in), and can be altered by password management staff. Multiple administrators should jointly control key systems. Major configuration changes might require approval of multiple staffers, as well as the automatic notification of concerned personnel. And firms should employ a recovery mechanism to regain control in the event that key administrators are incapacitated or uncooperative. This balances security needs with an ability to respond in the event of a crisis. Such a system was not in place in the earlier described case of the rogue IT staffer who held the city of San Francisco’s networks hostage by refusing to give up vital passwords. Have failure and recovery plans. While firms work to prevent infiltration attempts, they should also have provisions in place that plan for the worst. If a compromise has taken place, what needs to be done? Do stolen assets need to be devalued (e.g., accounts terminated, new accounts issued)? What should be done to notify customers and partners, educate them, and advise them through any necessary responses? Who should work with law enforcement and with the media? Do off-site backups or redundant systems need to be activated? Can systems be reliably restored without risking further damage? Best practices are beginning to emerge. While postevent triage is beyond the scope of our introduction, the good news is that firms are now sharing data on breaches. Given the potential negative consequences of a breach, organizations once rarely admitted they’d been compromised. But now many are obligated to do so. And the broad awareness of infiltration both reduces organizational stigma in coming forward, and allows firms and technology providers to share knowledge on the techniques used by cybercrooks. Information security is a complex, continually changing, and vitally important domain. The exploits covered in this chapter seem daunting, and new exploits constantly emerge. But your thinking on key issues should now be broader. Hopefully you’ve now embedded security thinking in your managerial DNA, and you are better prepared to be a savvy system user and a proactive participant working for your firm’s security. Stay safe! Key Takeaways • End users can engage in several steps to improve the information security of themselves and their organizations. These include surfing smart, staying vigilant, updating software and products, using a comprehensive security suite, managing settings and passwords responsibly, backing up, properly disposing of sensitive assets, and seeking education. • Frameworks such as ISO27k can provide a road map to help organizations plan and implement an effective security regime. • Many organizations are bound by security compliance commitments and will face fines and retribution if they fail to meet these commitments. • The use of frameworks and being compliant is not equal to security. Security is a continued process that must be constantly addressed and deeply ingrained in an organization’s culture. • Security is about trade-offs—economic and intangible. Firms need to understand their assets and risks in order to best allocate resources and address needs. • Information security is not simply a technical fix. Education, audit, and enforcement regarding firm policies are critical. 
The security team is broadly skilled and constantly working to identify and incorporate new technologies and methods into the organization. Involvement and commitment are essential from the boardroom to frontline workers, and out to customers and partners. Questions and Exercises 1. Visit the security page for your ISP, school, or employer. Which of the techniques that we've discussed here do they advocate? Are any additional techniques mentioned and discussed? What additional provisions do they offer (tools, services) to help keep you informed and secure? 2. What sorts of security regimes are in use at your university, and at firms you've worked or interned for? If you don't have experience with this, ask a friend or relative about their professional experience. Do you consider these measures to be too restrictive, too lax, or about right? 3. While we've discussed the risks of having security that is too lax, what risk does a firm run if its security mechanisms are especially strict? What might a firm give up? What are the consequences of strict end-user security provisions? 4. What risks does a firm face by leaving software unpatched? What risks does it face if it deploys patches as soon as they emerge? How should a firm reconcile these risks? 5. What methods do firms use to ensure the integrity of their software, their hardware, their networks, and their partners? 6. An organization's password management system represents "the keys to the city." Describe personnel issues that a firm should be concerned with regarding password administration. How might it address these concerns?
textbooks/biz/Management/Book%3A_Information_Systems__A_Managers_Guide_to_Harnessing_Technology/13%3A_Information_Security-_Barbarians_at_the_Gateway_(and_Just_About_Everywhere_Else)/13.04%3A_Taking_Action.txt
Learning Objectives After studying this section you should be able to do the following: 1. Understand the extent of Google’s rapid rise and its size and influence when compared with others in the media industry. 2. Recognize the shift away from traditional advertising media to Internet advertising. 3. Gain insight into the uniqueness and appeal of Google’s corporate culture. Google has been called a one-trick pony (Li, 2009), but as tricks go, it’s got an exquisite one. Google’s “trick” is matchmaking—pairing Internet surfers with advertisers and taking a cut along the way. This cut is substantial—about \$23 billion in 2009. In fact, as Wired’s Steve Levy puts it, Google’s matchmaking capabilities may represent “the most successful business idea in history” (Levy, 2009). For perspective, consider that as a ten-year-old firm, and one that had been public for less than five years, Google had already grown to earn more annual advertising dollars than any U.S. media company. No television network, no magazine group, no newspaper chain brings in more ad bucks than Google. And none is more profitable. While Google’s stated mission is “to organize the world’s information and make it universally accessible and useful,” advertising drives profits and lets the firm offer most of its services for free. Figure 14.1 U.S. Advertising Spending (by selected media) Online advertising represents the only advertising category trending with positive growth. Figures for 2009 and beyond are estimates. Data retrieved via eMarketer.com. 14.02: Understanding Search Learning Objectives After studying this section you should be able to do the following: 1. Understand the mechanics of search, including how Google indexes the Web and ranks its organic search results. 2. Examine the infrastructure that powers Google and how its scale and complexity offer key competitive advantages. Before diving into how the firm makes money, let’s first understand how Google’s core service, search, works. Perform a search (or query) on Google or another search engine, and the results you’ll see are referred to by industry professionals as organic or natural search. Search engines use different algorithms for determining the order of organic search results, but at Google the method is called PageRank (a bit of a play on words, it ranks Web pages, and was initially developed by Google cofounder Larry Page). Google does not accept money for placement of links in organic search results. Instead, PageRank results are a kind of popularity contest. Web pages that have more pages linking to them are ranked higher. Figure 14.4 The query for “Toyota Prius” triggers organic search results, flanked top and right by sponsored link advertisements. The process of improving a page’s organic search results is often referred to as search engine optimization (SEO). SEO has become a critical function for many marketing organizations since if a firm’s pages aren’t near the top of search results, customers may never discover its site. Google is a bit vague about the specifics of precisely how PageRank has been refined, in part because many have tried to game the system. In addition to in-bound links, Google’s organic search results also consider some two hundred other signals, and the firm’s search quality team is relentlessly analyzing user behavior for clues on how to tweak the system to improve accuracy (Levy, 2010). 
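As a rough illustration of the "in-bound links as votes" idea, the sketch below computes a toy version of PageRank over a three-page Web. Google's production algorithm weighs hundreds of additional signals and its exact parameters are not public; the link graph and the 0.85 damping factor here are standard textbook assumptions used only to show the mechanics.

# A toy illustration of the core "pages with more in-bound links rank higher" idea.
# The link graph and damping factor are invented for illustration; they are not
# Google's actual data or parameters.
def simple_pagerank(links, damping=0.85, iterations=50):
    """links: dict mapping each page to the list of pages it links to."""
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1.0 - damping) / len(pages) for p in pages}
        for page, outlinks in links.items():
            if not outlinks:
                continue
            share = damping * rank[page] / len(outlinks)
            for target in outlinks:
                new_rank[target] += share   # each out-link passes along a share of rank
        rank = new_rank
    return rank

toy_web = {
    "home": ["products", "blog"],
    "blog": ["home", "products"],
    "products": ["home"],
}
print(simple_pagerank(toy_web))   # "home" scores highest: it has the most in-bound links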
The less scrupulous have tried creating a series of bogus Web sites, all linking back to the pages they're trying to promote (this is called link fraud, and Google actively works to uncover and shut down such efforts). We do know that links from some Web sites carry more weight than others. For example, links from Web sites that Google deems "influential," and links from most ".edu" Web sites, have greater weight in PageRank calculations than links from run-of-the-mill ".com" sites. Spiders and Bots and Crawlers—Oh My! When performing a search via Google or another search engine, you're not actually searching the Web. What really happens is that the major search engines make what amounts to a copy of the Web, storing and indexing the text of online documents on their own computers. Google's index considers over one trillion URLs (Wright, 2009). The upper right-hand corner of a Google query shows you just how fast a search can take place (in the example above, rankings from over eight million results containing the term "Toyota Prius" were delivered in less than two tenths of a second). To create these massive indexes, search firms use software to crawl the Web and uncover as much information as they can find. This software is referred to by several different names—software robots, spiders, Web crawlers—but they all pretty much work the same way. In order to make its Web sites visible, every online firm provides a list of all of the public, named servers on its network, known as domain name service (DNS) listings. For example, Yahoo! has different servers that can be found at http://www.yahoo.com, sports.yahoo.com, weather.yahoo.com, finance.yahoo.com, and so on. Spiders start at the first page on every public server and follow every available link, traversing a Web site until all pages are uncovered. Google will crawl frequently updated sites, like those run by news organizations, as often as several times an hour. Rarely updated, less popular sites might only be reindexed every few days. The method used to crawl the Web also means that if a Web site isn't the first page on a public server, or isn't linked to from another public page, then it'll never be found1. Note, however, that each search engine also offers a page where you can submit your Web site for indexing. While search engines show you what they've found on their copy of the Web's contents, clicking a search result will direct you to the actual Web site, not the copy. But sometimes you'll click a result only to find that the Web site doesn't match what the search engine found. This happens if a Web site was updated before your search engine had a chance to reindex the changes. In most cases you can still pull up the search engine's copy of the page. Just click the "Cached" link below the result (the term cache refers to a temporary storage space used to speed computing tasks). But what if you want the content on your Web site to remain off limits to search engine indexing and caching? Organizations have created a set of standards to stop the spider crawl, and all commercial search engines have agreed to respect these standards. One way is to embed an invisible line of HTML code in a Web site that tells all software robots to stop indexing a page, stop following links on the page, or stop offering old page archives in a cache. Users don't see this code, but commercial Web crawlers do.
For those familiar with HTML code (the language used to describe a Web site), the command to stop Web crawlers from indexing a page, following links, and listing archives of cached pages looks like this: <META NAME="ROBOTS" CONTENT="NOINDEX, NOFOLLOW, NOARCHIVE"> There are other techniques to keep the spiders out, too. Web site administrators can add a special file (called robots.txt) that provides similar instructions on how indexing software should treat the Web site. And a lot of content lies inside the "dark Web," either behind corporate firewalls or inaccessible to those without a user account—think of private Facebook updates no one can see unless they're your friend—all of that is out of Google's reach. What's It Take to Run This Thing? Sergey Brin and Larry Page started Google with just four scavenged computers (Liedtke, 2008). But in a decade, the infrastructure used to power the search sovereign has ballooned to the point where it is now the largest of its kind in the world (Carr, 2006). Google doesn't disclose the number of servers it uses, but by some estimates, it runs over 1.4 million servers in over a dozen so-called server farms worldwide (Katz, 2009). In 2008, the firm spent $2.18 billion on capital expenditures, with data centers, servers, and networking equipment eating up the bulk of this cost2. Building massive server farms to index the ever-growing Web is now the cost of admission for any firm wanting to compete in the search market. This is clearly no longer a game for two graduate students working out of a garage. Google's Container Data Center (video) Take a virtual tour of one of Google's data centers. The size of this investment not only creates a barrier to entry, it influences industry profitability, with market-leader Google enjoying huge economies of scale. Firms may spend the same amount to build server farms, but if Google has nearly 70 percent of this market (and growing) while Microsoft's search draws less than one-seventh the traffic, which do you think enjoys the better return on investment? The hardware components that power Google aren't particularly special. In most cases the firm uses the kind of Intel or AMD processors, low-end hard drives, and RAM chips that you'd find in a desktop PC. These components are housed in rack-mounted servers about 3.5 inches thick, with each server containing two processors, eight memory slots, and two hard drives. In some cases, Google mounts racks of these servers inside standard-sized shipping containers, each with as many as 1,160 servers per box (Shankland, 2009). A given data center may have dozens of these server-filled containers all linked together. Redundancy is the name of the game. Google assumes individual components will regularly fail, but no single failure should interrupt the firm's operations (making the setup what geeks call fault-tolerant). If something breaks, a technician can easily swap it out with a replacement. Each server farm layout has also been carefully designed with an emphasis on lowering power consumption and cooling requirements. And the firm's custom software (much of it built upon open source products) allows all this equipment to operate as the world's largest grid computer. Web search is a task particularly well suited for the massively parallel architecture used by Google and its rivals. For an analogy of how this works, imagine that, working alone, you need to find a particular phrase in a hundred-page document (that's a one-server effort). Next, imagine that you can distribute the task across five thousand people, giving each of them a separate sentence to scan (that's the multi-server grid). This difference gives you a sense of how search firms use massive numbers of servers and the divide-and-conquer approach of grid computing to quickly find the needles you're searching for within the Web's haystack.
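The sketch below captures that divide-and-conquer idea in miniature, splitting one large document across a pool of worker processes. It is only an analogy in code: Google's grid software is custom built, spans data centers, and handles failures and coordination that this toy ignores.

# A toy version of divide-and-conquer search: split a document into chunks and let
# worker processes scan the chunks in parallel. Purely illustrative of the concept.
from concurrent.futures import ProcessPoolExecutor

def find_phrase(chunk_and_phrase):
    chunk_id, text, phrase = chunk_and_phrase
    return chunk_id if phrase in text else None   # each "server" scans its own slice

def parallel_search(document, phrase, workers=4):
    chunk_size = max(1, len(document) // workers)
    chunks = [(i, document[i * chunk_size:(i + 1) * chunk_size + len(phrase)], phrase)
              for i in range(workers)]            # overlap chunks slightly so a phrase
                                                  # spanning a boundary isn't missed
    with ProcessPoolExecutor(max_workers=workers) as pool:
        hits = [c for c in pool.map(find_phrase, chunks) if c is not None]
    return hits

if __name__ == "__main__":
    doc = ("filler text " * 10000) + "toyota prius" + (" more filler" * 10000)
    print(parallel_search(doc, "toyota prius"))   # reports which chunk(s) held the phrase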
(For more on grid computing, see Chapter 5 "Moore's Law: Fast, Cheap Computing and What It Means for the Manager", and for more on server farms, see Chapter 10 "Software in Flux: Partly Cloudy and Sometimes Free".) Figure 14.5 The Google Search Appliance is a hardware product that firms can purchase in order to run Google search technology within the privacy and security of an organization's firewall. Google will even sell you a bit of its technology so that you can run your own little Google in-house without sharing documents with the rest of the world. Google's line of search appliances consists of rack-mounted servers that can index documents within a corporation's Web site, even specifying password and security access on a per-document basis. Selling hardware isn't a large business for Google, and other vendors offer similar solutions, but search appliances can be vital tools for law firms, investment banks, and other document-rich organizations. Trendspotting with Google Google not only gives you search results, it lets you see aggregate trends in what its users are searching for, and this can yield powerful insights. For example, by tracking search trends for flu symptoms, Google's Flu Trends Web site can pinpoint outbreaks one to two weeks faster than the Centers for Disease Control and Prevention (Bruce, 2009). Want to go beyond the flu? Google's Trends and Insights for Search services allow anyone to explore search trends, breaking out the analysis by region, category (image, news, product), date, and other criteria. Savvy managers can leverage these and similar tools for competitive analysis, comparing a firm, its brands, and its rivals. Figure 14.6 Google Insights for Search can be a useful tool for competitive analysis and trend discovery. The chart above shows a comparison (over a twelve-month period, and geographically) of search interest in the terms Wii, Playstation, and Xbox.
textbooks/biz/Management/Book%3A_Information_Systems__A_Managers_Guide_to_Harnessing_Technology/14%3A_Google-_Search_Online_Advertising_and_Beyond/14.01%3A_Introduction.txt
Learning Objectives After studying this section you should be able to do the following: 1. Understand how media consumption habits are shifting. 2. Be able to explain the factors behind the growth and appeal of online advertising. For several years, Internet advertising has been the only major media ad category to show significant growth. There are three factors driving online ad growth trends: (1) increased user time online, (2) improved measurement and accountability, and (3) targeting. American teenagers (as well as the average British, Australian, and New Zealander Web surfer) now spend more time on the Internet than watching television1 (Hendry, 2008)2. They're reading fewer print publications, and radio listening among the iPod generation is down 30 percent (Tobias, 2009). So advertisers are simply following the market. Online channels also provide advertisers with a way to reach consumers at work—something that was previously much more difficult to do. Many advertisers have also been frustrated by how difficult it's been to gauge the effectiveness of traditional ad channels such as TV, print, and radio. This frustration is reflected in the old industry saying, "I know that half of my advertising is working—I just don't know which half." Well, with the Internet, now you know. While measurement technologies aren't perfect, advertisers can now count ad impressions (the number of times an ad appears on a Web site), track whether a user clicks on an ad, and trace the product purchases or other Web site activity that comes from those clicks3. And as we'll see, many online ad payment schemes are directly linked to ad performance. Various technologies and techniques also make it easier for firms to target users based on how likely a person is to respond to an ad. In theory a firm can use targeting to spend marketing dollars only on those users deemed to be its best prospects. Let's look at a few of these approaches in action. Key Takeaways • There are three factors driving online ad growth trends: (1) increasing user time online, (2) improved measurement and accountability, and (3) targeting. • Digital media is decreasing the time spent with traditional media consumption channels (e.g., radio, TV, newspapers), potentially lowering the audience reach of these older channels and making them less attractive to advertisers. • Measurement techniques allow advertisers to track the performance of their ads—indicating things such as how often an ad is displayed, how often an ad is clicked, where an ad was displayed when it was clicked, and more. Measurement metrics can be linked to payment schemes, improving return on investment (ROI) and accountability compared to many types of conventional advertising. • Advertising ROI can be improved through targeting. Targeting allows a firm to serve ads to specific categories of users, so a firm can send ads to the groups it is most interested in reaching, and to those most likely to respond to an effort. Questions and Exercises 1. How does your media time differ from your parents'? Does it differ among your older or younger siblings, or other relatives? Which media are you spending more time with? Less time with? 2. Put yourself in the role of a traditional media firm that is seeing its market decline. What might you do to address decline concerns? Have these techniques been attempted by other firms? Do you think they've worked well? Why or why not? 3. Put yourself in the role of an advertiser for a product or service that you're interested in.
Is the Internet an attractive channel for you? How might you use the Internet to reach customers you are most interested in? Where might you run ads? Who might you target? Who might you avoid? How might the approach you use differ from traditional campaigns you’d run in print, TV, or radio? How might the size (money spent, attempted audience reach) and timing (length of time run, time between campaigns) of ad campaigns online differ from offline campaigns? 4. List ways in which you or someone you know has been targeted in an Internet ad campaign. Was it successful? How do you feel about targeting? 1“American Teenagers Spend More Time Online Than Watching Television,” MediaWeek, June 19, 2008. 2“Brits Spend More Time Online Than Watching TV,” BigMouthMedia, July 12, 2007. 3For a more detailed overview of the limitations in online ad measurement, see L. Rao, “Guess Which Brand Is Now Worth \$100 Billion?” TechCrunch, April 30, 2009.
textbooks/biz/Management/Book%3A_Information_Systems__A_Managers_Guide_to_Harnessing_Technology/14%3A_Google-_Search_Online_Advertising_and_Beyond/14.03%3A_Understanding_the_Increase_in_Online_Ad_Spending.txt
Learning Objectives After studying this section you should be able to do the following: 1. Understand Google's search advertising revenue model. 2. Know the factors that determine the display and ranking of advertisements appearing on Google's search results pages. 3. Be able to describe the uses and technologies behind geotargeting. The practice of running and optimizing search engine ad campaigns is referred to as search engine marketing (SEM) (Elliott, 2006). SEM is a hot topic in an increasingly influential field, so it's worth spending some time learning how search advertising works on the Internet's largest search engine. Roughly two-thirds of Google's revenues come from ads served on its own sites, and the vast majority of this revenue comes from search engine ads1. During Google's early years, the firm actually resisted making money through ads. In fact, while at Stanford, Brin and Page even coauthored a paper titled "The Evils of Advertising" (Vise, 2008). But when Yahoo! and others balked at buying Google's search technology (offered for as little as $500,000), Google needed to explore additional revenue streams. It wasn't until two years after incorporation that Google ran ads alongside organic search results. That first ad, one for "Live Mail Order Lobsters," appeared just minutes after the firm posted a link reading "See Your Ad Here" (Levy, 2009). Google has only recently begun incorporating video and image ads into search. For the most part, the ads you'll see to the right (and sometimes top) of Google's organic search results appear as keyword advertising, meaning they're targeted based on a user's query. Advertisers bid on the keywords and phrases that they'd like to use to trigger the display of their ad. Linking ads to search was a brilliant move, since the user's search term indicates an overt interest in a given topic. Want to sell hotel stays in Tahiti? Link your ads to the search term "Tahiti Vacation." Not only are search ads highly targeted, advertisers only pay for results. Text ads appearing on Google search pages are billed on a pay-per-click (PPC) basis, meaning that advertisers don't spend a penny unless someone actually clicks on their ad. Note that the term pay-per-click is sometimes used interchangeably with the term cost-per-click (CPC). Not Entirely Google's Idea Google didn't invent pay-for-performance search advertising. A firm named GoTo.com (later renamed Overture) pioneered pay-per-click ads and bidding systems and held several key patents governing the technology. Overture provided pay-per-click ad services to both Yahoo! and Microsoft, but it failed to refine and match the killer combination of ad auctions and search technology that made Google a star. Yahoo! eventually bought Overture and sued Google for patent infringement. In 2004, the two firms settled, with Google giving Yahoo! 2.7 million shares in exchange for a "fully paid, perpetual license" to over sixty Overture patents (Olsen, 2004). If an advertiser wants to display an ad on Google search, they can set up a Google AdWords advertising account in minutes, specifying just a single ad, or multiple ad campaigns that trigger different ads for different keywords. Advertisers also specify what they're willing to pay each time an ad is clicked, how much their overall ad budget is, and they can control additional parameters, such as the timing and duration of an ad campaign.
If no one clicks on an ad, Google doesn't make money, advertisers don't attract customers, and searchers aren't seeing ads they're interested in. So in order to create a winning scenario for everyone, Google has developed a precise ad ranking formula that rewards top-performing ads by considering two metrics: the maximum CPC that an advertiser is willing to pay, and the advertisement's quality score—a broad measure of ad performance. Create high-quality ads and your advertisements might appear ahead of the competition, even if your competitors bid more than you. But if ads perform poorly they'll fall in rankings or even drop from display consideration. Below is the formula used by Google to determine the rank order of sponsored links appearing on search results pages. Ad Rank = Maximum CPC × Quality Score One factor that goes into determining an ad's quality score is the click-through rate (CTR) for the ad, the number of users who clicked an ad divided by the number of times the ad was delivered (the impressions). The CTR measures the percentage of people who clicked on an ad to arrive at a destination site. Also included in a quality score are the overall history of click performance for the keywords linked to the ad, the relevance of an ad's text to the user's query, and Google's automated assessment of the user experience on the landing page—the Web site displayed when a user clicks on the ad. Ads that don't get many clicks, ad descriptions that have nothing to do with query terms, and ads that direct users to generic pages that load slowly or aren't strongly related to the keywords and descriptions used in an ad, will all lower an ad's chance of being displayed2. When an ad is clicked, advertisers don't actually pay their maximum CPC; Google discounts ads to just one cent more than the minimum necessary to maintain an ad's position on the page. So if you bid one dollar per click, but the ad ranked below you bids ninety cents, you'll pay just ninety-one cents if the ad is clicked. Discounting was a brilliant move. No one wants to get caught excessively overbidding rivals, so discounting helps reduce the possibility of this so-called bidder's remorse. And with this risk minimized, the system actually encouraged higher bids (Levy, 2009)! Ad ranking and cost-per-click calculations take place as part of an automated auction that occurs every time a user conducts a search. Advertisers get a running total of ad performance statistics so that they can monitor the return on their investment and tweak promotional efforts for better results. And this whole system is automated for self-service—all it takes is a credit card, an ad idea, and you're ready to go.
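To see how ranking and discounting interact, consider the simplified auction sketch below. The bids and quality scores are invented, and Google's actual pricing depends on its own nonpublic quality models and reserve prices; the discount formula used here (the runner-up's ad rank divided by your quality score, plus one cent) is one common textbook formulation, which reduces to the simple ninety-one-cent example above when quality scores are equal. The sketch shows why a well-made ad with a modest bid can outrank, and pay less than, a bigger-spending rival.

# A simplified sketch of the ranking and discounting mechanics described above.
# All numbers are invented purely to show how the math works out.
def run_auction(ads):
    """ads: list of dicts with 'name', 'max_cpc' (dollars), and 'quality_score'."""
    for ad in ads:
        ad["ad_rank"] = ad["max_cpc"] * ad["quality_score"]   # Ad Rank = Max CPC x Quality Score
    ranked = sorted(ads, key=lambda a: a["ad_rank"], reverse=True)
    for position, ad in enumerate(ranked):
        if position + 1 < len(ranked):
            runner_up = ranked[position + 1]
            # pay just enough to keep the position ahead of the next ad, plus one cent
            # (one common formulation; assumes no reserve prices)
            needed = runner_up["ad_rank"] / ad["quality_score"]
            ad["actual_cpc"] = round(min(ad["max_cpc"], needed + 0.01), 2)
        else:
            ad["actual_cpc"] = 0.01   # lowest-ranked ad pays a minimum amount (illustrative)
    return ranked

auction = run_auction([
    {"name": "Ad A", "max_cpc": 1.00, "quality_score": 9},   # strong ad, modest bid
    {"name": "Ad B", "max_cpc": 1.50, "quality_score": 5},   # bigger bid, weaker ad
])
for ad in auction:
    print(ad["name"], "rank:", ad["ad_rank"], "pays per click:", ad["actual_cpc"])
    # Ad A outranks Ad B (9.0 vs. 7.5) despite the lower bid, and pays well under its maximum.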
• How Much Do Advertisers Pay per Click? Google rakes in billions on what amounts to pocket change earned one click at a time. Most clicks bring in between thirty cents and one dollar. However, costs can vary widely depending on industry, current competition, and perceived customer value. Table 14.1 "10 Most Expensive Industries for Keyword Ads" shows some of the highest reported CPC rates. But remember, these values fluctuate in real time based on auction participants.
Table 14.1 10 Most Expensive Industries for Keyword Ads
Business/Industry | Keywords in the Top 25 | Avg. CPC
Structured Settlements | 2 | $51.97
Secured Loans | 2 | $50.67
Buying Endowments | 1 | $50.35
Mesothelioma Lawyers | 5 | $50.30
DUI Lawyers | 4 | $49.78
Conference Call Companies | 1 | $49.64
Car Insurance Quotes | 3 | $49.61
Student Loan Consolidation | 3 | $49.44
Data Recovery | 2 | $49.43
Remortgages | 2 | $49.42
Source: X. Becket, "10 Businesses with the Highest Cost Per Click," WebPageFX Weekly, February 20, 2009.
Since rates are based on auctions, top rates reflect what the market is willing to bear. As an example, law firms, which bring in big bucks from legal fees, decisions, and settlement payments, often justify higher customer acquisition costs. And firms that see results will keep spending. Los Angeles–based Chase Law Group has said that it brings in roughly 60 percent of its clients through Internet advertising (Mann, 2006). • IP Addresses and Geotargeting Geotargeting occurs when computer systems identify a user's physical location (sometimes called the geolocation) for the purpose of delivering tailored ads or other content. On Google AdWords, for example, advertisers can specify that their ads only appear for Web surfers located in a particular country, state, metropolitan region, or a given distance around a precise locale. They can even draw a custom ad-targeting region on a map and tell Google to only show ads to users detected inside that space. Ads in Google Search are geotargeted based on IP address. Every device connected to the Internet has a unique IP address assigned by the organization connecting the device to the network. Normally you don't see your IP address (a set of four numbers, from 0 to 255, separated by periods; e.g., 136.167.2.220). But the range of IP addresses "owned" by major organizations and Internet service providers (ISPs) is public knowledge. In many cases it's possible to make an accurate guess as to where a computer, laptop, or mobile phone is located simply by cross-referencing a device's current IP address with this public list. For example, it's known that all devices connected to the Boston College network contain IP addresses starting with the numbers 136.167. If a search engine detects a query coming from an IP address that begins with those two numbers, it can be fairly certain that the person using that device is in the greater Boston area. Figures 14.7 and 14.8 In this geotargeting example, the same search term is used at roughly the same time on separate computers located in the Silicon Valley area (top) and Boston (bottom). Note how geotargeting impacts results. IP addresses will change depending on how and where you connect to the Internet. Connect your laptop to a hotel's Wi-Fi when visiting a new city, and you're likely to see ads specific to that location. That's because your Internet service provider has changed, and the firm serving your ads has detected that you are using an IP address known to be associated with your new location. Geotargeting via IP address is fairly accurate, but it's not perfect. For example, some Internet service providers may provide imprecise or inaccurate information on the location of their networks. Others might be so vague that it's difficult to make a best guess at the geography behind a set of numbers (values assigned by a multinational corporation with many locations, for example). And there are other ways locations are hidden, such as when Internet users connect to proxy servers, third-party computers that pass traffic to and from a specific address without revealing the address of the connected users.
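At its simplest, IP-based geotargeting is little more than a lookup against a table of known address ranges, as in the sketch below. The prefix table here is a toy stand-in (only the Boston College prefix comes from the example in the text; the other entry is invented); real ad servers rely on commercial geolocation databases that map millions of address blocks.

# A bare-bones illustration of IP-based geotargeting: match the start of an incoming
# IP address against a small table of address ranges with known locations.
IP_PREFIX_LOCATIONS = {
    "136.167.": "Boston, MA (Boston College network)",   # example cited in the text
    "171.64.":  "Palo Alto, CA (illustrative entry)",     # invented for this sketch
}

def guess_location(ip_address):
    for prefix, location in IP_PREFIX_LOCATIONS.items():
        if ip_address.startswith(prefix):
            return location
    return "unknown - fall back to broader targeting or skip geotargeted ads"

print(guess_location("136.167.2.220"))   # Boston guess, so serve Boston-area ads
print(guess_location("203.0.113.9"))     # unknown address, no location-specific ads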
What’s My IP Address? While every operating system has a control panel or command that you can use to find your current IP address, there are also several Web sites that will quickly return this value (and a best guess at your current location). One such site is http://ip-adress.com (note the spelling has only one “d”). Visit this or a similar site with a desktop, laptop, and mobile phone. Do the results differ and are they accurate? Why? Geotargeting Evolves Beyond the IP Address There are several other methods of geotargeting. Firms like Skyhook Wireless can identify a location based on its own map of Wi-Fi hotspots and nearby cell towers. Many mobile devices come equipped with global positioning system (GPS) chips (identifying location via the GPS satellite network). And if a user provides location values such as a home address or zip code to a Web site, then that value might be stored and used again to make a future guess at a user’s location. Key Takeaways • Roughly two-thirds of Google’s revenues come from ads served on its own sites, and the vast majority of this revenue comes from search engine ads. • Advertisers choose and bid on the keywords and phrases that they’d like to use to trigger the display of their ad. • Advertisers pay for cost-per-click advertising only if an ad is clicked on. Google makes no money on CPC ads that are displayed but not clicked. • Google determines ad rank by multiplying CPC by Quality Score. Ads with low ranks might not display at all. • Advertisers usually don’t pay their maximum CPC. Instead, Google discounts ads to just one cent more than the minimum necessary to maintain an ad’s position on the page—a practice that encourages higher bids. • Geotargeting occurs when computer systems identify a user’s physical location (sometimes called geolocation) for the purpose of delivering tailored ads or other content. • Google uses IP addresses to target ads. • Geotargeting can also be enabled by the satellite-based global positioning system (GPS) or based on estimating location from cell phone towers or Wi-Fi hotspots. Questions and Exercises 1. Which firm invented pay-per-click advertising? Why does Google dominate today and not this firm? 2. How are ads sold via Google search superior to conventional advertising media such as TV, radio, billboard, print, and yellow pages? Consider factors like the available inventory of space to run ads, the cost to run ads, the cost to acquire new advertisers, and the appeal among advertisers. 3. Are there certain kinds of advertising campaigns and goals where search advertising wouldn’t be a good fit? Give examples and explain why. 4. Can a firm buy a top ad ranking? Why or why not? 5. List the four factors that determine an ad’s quality score. 6. How much do firms typically pay for a single click? 7. Sites like SpyFu.com and KeywordSpy.com provide a list of the keywords with the highest cost per click. Visit the Top Lists page at SpyFu, KeywordSpy, or a comparable site, to find estimates of the current highest paying cost per click. Which keywords pay the most? Why do you think firms are willing to spend so much? 8. What is bidder’s remorse? How does Google’s ad discounting impact this phenomenon? 9. Visit http://www.ip-adress.com/ using a desktop, laptop, and mobile phone (work with a classmate or friend if you don’t have access to one of these devices). How do results differ? Why? Are they accurate? What factors go into determining the accuracy of IP-based geolocation? 10. 
List and briefly describe other methods of geotargeting besides IP address, and indicate the situations and devices where these methods would be more and less effective. 11. The field of search engine marketing (SEM) is relatively new and rising in importance. And since the field is so new and constantly changing, there are plenty of opportunities for young, knowledgeable professionals. Which organizations, professional certification, and other resources are available to SEM professionals? Spend some time searching for these resources online and be prepared to share your findings with your class.
textbooks/biz/Management/Book%3A_Information_Systems__A_Managers_Guide_to_Harnessing_Technology/14%3A_Google-_Search_Online_Advertising_and_Beyond/14.04%3A_Search_Advertising.txt
Learning Objectives After studying this section you should be able to do the following: 1. Understand ad networks, and how ads are distributed and served based on Web site content. 2. Recognize how ad networks provide advertiser reach and support niche content providers. 3. Be aware of content adjacency problems and their implications. 4. Know the strategic factors behind ad network appeal and success. Google runs ads not just in search, but also across a host of Google-owned sites like Gmail, Google News, and Blogger. It will even tailor ads for its map products and for mobile devices. But about 30 percent of Google's revenues come from running ads on Web sites that the firm doesn't even own1. Next time you're surfing online, look around the different Web sites that you visit and see how many sport boxes labeled "Ads by Google." Those Web sites are participating in Google's AdSense ad network, which means they're running ads for Google in exchange for a cut of the take. Participants range from small-time bloggers to some of the world's most highly trafficked sites. Google lines up the advertisers, provides the targeting technology, serves the ads, and handles advertiser payment collection. To participate, content providers just sign up online, put a bit of Google-supplied HTML code on their pages, and wait for Google to send them cash (Web sites typically get about seventy to eighty cents for every AdSense dollar that Google collects) (Tedeschi, 2006). Google originally developed AdSense to target ads based on keywords automatically detected inside the content of a Web site. A blog post on your favorite sports team, for example, might be accompanied by ads from ticket sellers or sports memorabilia vendors. AdSense and similar online ad networks provide advertisers with access to the long tail of niche Web sites by offering both increased opportunities for ad exposure as well as more-refined targeting opportunities. Figure 14.9 The images above show advertising embedded around a story on the New York Times Web site. The page runs several ads provided by different ad networks. For example, the WebEx banner ad above the article's headline was served by AOL-owned Platform-A/Tacoda. The "Ads by Google" box appeared at the end of the article. Note how the Google ads are related to the content of the Times article. Running ads on your Web site is by no means a guaranteed path to profits. The Internet graveyard is full of firms that thought they'd be able to sustain their businesses on ads alone. But for many Web sites, ad networks can be like oxygen, sustaining them with revenue opportunities they'd never be able to achieve on their own. For example, AdSense provided early revenue for the popular social news site Digg, as well as the multimillion-dollar TechCrunch media empire. It supports Disaboom, a site run by physician and quadriplegic Dr. Glen House. And it continues to be the primary revenue generator for AskTheBuilder.com. That site's founder, former builder Tim Carter, had been writing a handyman's column syndicated to some thirty newspapers.
The newspaper columns didn’t bring in enough to pay the bills, but with AdSense he hit pay dirt, pulling in over \$350,000 in ad revenue in just his first year (Rothenberg, 2008)! Figure 14.10 Tim Carter’s Ask the Builder Web site runs ads from Google and other ad networks. Note different ad formats surrounding the content. Video ads are also integrated into many of the site’s video tutorials. Beware the Content Adjacency Problem Contextual advertising based on keywords is lucrative, but like all technology solutions it has its limitations. Vendors sometimes suffer from content adjacency problems when ads appear alongside text they’d prefer to avoid. In one particularly embarrassing example, a New York Post article detailed a gruesome murder where hacked up body parts were stowed in suitcases. The online version of the article included contextual advertising and was accompanied by…luggage ads (Overholt, 2007). To combat embarrassment, ad networks provide opportunities for both advertisers and content providers to screen out potentially undesirable pairings based on factors like vendor, Web site, and category. Advertisers can also use negative keywords, which tell networks to avoid showing ads when specific words appear (e.g., setting negative keywords to “murder” or “killer” could have spared luggage advertisers from the embarrassing problem mentioned above). Ad networks also refine ad-placement software based on feedback from prior incidents (for more on content adjacency problems, see Chapter 8 “Facebook: Building a Business from the Social Graph”). Google launched AdSense in 2003, but Google is by no means the only company to run an ad network, nor was it the first to come up with the idea. Rivals include the Yahoo! Publisher Network, Microsoft’s adCenter, and AOL’s Platform-A. Others, like Quigo, don’t even have a consumer Web site yet manage to consolidate enough advertisers to attract high-traffic content providers such as ESPN, Forbes, Fox, and USA Today. Advertisers also aren’t limited to choosing just one ad network. In fact, many content provider Web sites will serve ads from several ad networks (as well as exclusive space sold by their own sales force), oftentimes mixing several different offerings on the same page. • Ad Networks and Competitive Advantage While advertisers can use multiple ad networks, there are several key strategic factors driving the industry. For Google, its ad network is a distribution play. The ability to reach more potential customers across more Web sites attracts more advertisers to Google. And content providers (the Web sites that distribute these ads) want there to be as many advertisers as possible in the ad networks that they join, since this should increase the price of advertising, the number of ads served, and the accuracy of user targeting. If advertisers attract content providers, which in turn attract more advertisers, then we’ve just described network effects! More participants bringing in more revenue also help the firm benefit from scale economies—offering a better return on investment from its ad technology and infrastructure. No wonder Google’s been on such a tear—the firm’s loaded with assets for competitive advantage! Google’s Ad Reach Gets Bigger While Google has the largest network specializing in distributing text ads, it had been a laggard in graphical display ads (sometimes called image ads). That changed in 2008, with the firm’s \$3.1 billion acquisition of display ad network and targeting company DoubleClick. 
Now in terms of the number of users reached, Google controls both the largest text ad network and the largest display ad network (Baker, 2008). Key Takeaways • Google also serves ads through non-Google partner sites that join its ad network. These partners distribute ads for Google in exchange for a percentage of the take. • AdSense ads are targeted based on keywords that Google detects inside the content of a Web site. • AdSense and similar online ad networks provide advertisers with access to the long tail of niche Web sites. • Ad networks handle advertiser recruitment, ad serving, and revenue collection, opening up revenue earning possibilities to even the smallest publishers. Questions and Exercises 1. On a percentage basis, how important is AdSense to Google's revenues? 2. Why do ad networks appeal to advertisers? Why do they appeal to content providers? What functions are assumed by the firm overseeing the ad network? 3. What factors determine the appeal of an ad network to advertisers and content providers? Which of these factors are potentially sources of competitive advantage? 4. Do dominant ad networks enjoy strong network effects? Are there also strong network effects that drive consumers to search? Why or why not? 5. How difficult is it for a Web site to join an ad network? What does this imply about ad network switching costs? Does a site have to choose one network exclusively over another? Does ad network membership prevent a firm from selling its own online advertising, too? 6. What is the content adjacency problem? Why does it occur? What classifications of Web sites might be particularly susceptible to the content adjacency problem? What can advertisers do to minimize the likelihood that a content adjacency problem will occur?
textbooks/biz/Management/Book%3A_Information_Systems__A_Managers_Guide_to_Harnessing_Technology/14%3A_Google-_Search_Online_Advertising_and_Beyond/14.05%3A_Ad_NetworksDistribution_beyond_Search.txt
Learning Objectives After studying this section you should be able to do the following: 1. Know the different formats and media types that Web ads can be displayed in. 2. Know the different ways ads are sold. 3. Know that games can be an ad channel under the correct conditions. Online ads aren't just about text ads billed in CPC. Ads running through Google AdSense, through its DoubleClick subsidiary, or on most competitor networks can be displayed in several formats and media types, and can be billed in different ways. The specific ad formats supported depend on the ad network but can include the following: image (or display) ads (such as horizontally oriented banners, smaller rectangular buttons, and vertically oriented "skyscraper" ads); rich media ads (which can include animation or video); and interstitials (ads that run before a user arrives at a Web site's contents). The industry trade group, the Interactive Advertising Bureau (IAB), sets common standards for display ads so that a single creative (the design and content of the advertisement) can run unmodified across multiple ad networks and Web sites1. And there are lots of other ways ads are sold besides cost-per-click. Most graphical display ads are sold according to the number of times the ad appears (the impression). Ad rates are quoted in CPM, meaning cost per thousand impressions (the M representing the Roman numeral for one thousand). Display ads sold on a CPM basis are often used as part of branding campaigns targeted more at creating awareness than generating click-throughs. Such techniques often work best for promoting products like soft drinks, toothpaste, or movies. Cost-per-action (CPA) ads pay whenever a user clicks through and performs a specified action such as signing up for a service, requesting material, or making a purchase. Affiliate programs are a form of cost-per-action, where vendors share a percentage of revenue with Web sites that direct purchasing customers to their online storefronts. Amazon runs the world's largest affiliate program, and referring sites can earn 4 percent to 15 percent of sales generated from these click-throughs. Purists might not consider affiliate programs as advertising (rather than text or banner ads, Amazon's affiliates offer links and product descriptions that point back to Amazon's Web site), but these programs can be important tools in a firm's promotional arsenal. And rather than buying targeted ads, a firm might sometimes opt to become an exclusive advertiser on a site. For example, a firm could buy access to all ads served on a site's main page; it could secure exclusive access to a region of the page (such as the topmost banner ad); or it may pay to sponsor a particular portion or activity on a Web site (say a parenting forum, or a "click-to-print" button). Such deals can be billed based on a flat rate, CPM, CPC, or any combination of metrics. Ads in Games? As consumers spend more time in video games, it's only natural that these products become ad channels, too. Finding a sensitive mix that introduces ads without eroding the game experience can be a challenge. Advertising can work in racing or other sports games (in 2008 the Obama campaign famously ran virtual billboards in EA's Burnout Paradise), but ads make less sense for games set in the past, future, or on other worlds. Branding ads often work best, since click-throughs are typically not something you want disrupting your gaming experience. Advertisers have also explored sponsorships of Web-based and mobile games.
Sponsorships often work best with casual games, such as those offered on Yahoo! Games or EA's Pogo. Firms have also created online mini games (so-called advergames) for longer-term, immersive brand engagement (e.g., Mini Cooper's Slide Parking and Stride Gum's Chew Challenge). Others have tried a sort of virtual product placement integrated into experiences. A version of The Sims, for example, included virtual replicas of real-world inventory from IKEA and H&M. Figure 14.11 Obama Campaign's Virtual Billboard in EA's Burnout Paradise Hyperakt – Billboard for Obama in NYC – CC BY NC-SA 2.0. In-game ad-serving technology also lacks the widely accepted standards of Web-based ads, so it's unlikely that ads designed for a Wii sports game could translate into a PS3 first-person shooter. Also, one of the largest in-game ad networks, Massive, is owned by Microsoft. That's good if you want to run ads on Xbox, but Microsoft isn't exactly a firm that Nintendo or Sony want to play nice with. In-game advertising shows promise, but the medium is considerably more complicated than conventional Web site ads. That complexity lowers relative ROI and will likely continue to constrain growth. Key Takeaways • Web ad formats include, but are not limited to, the following: image (or display) ads (such as horizontally oriented banners, smaller rectangular buttons, and vertically oriented skyscraper ads), rich media ads (which can include animation or video), and interstitials (ads that run before a user arrives at a Web site's contents). • In addition to cost-per-click, ads can be sold based on the number of times the ad appears (impressions), whenever a user performs a specified action such as signing up for a service, requesting material, or making a purchase (cost-per-action), or on an exclusive basis, which may be billed at a flat rate. • In-game advertising shows promise, with successful branding campaigns run as part of sports games, through in-game product placement, via sponsorship of casual games, or in brand-focused advergames. • A lack of standards, concerns regarding compatibility with gameplay, and the cost of developing and distributing games are all stifling the growth of in-game ads. Questions and Exercises 1. What is the IAB and why is it necessary? 2. What are the major ad format categories? 3. What's an interstitial? What's a rich media ad? Have you seen these? Do you think they are effective? Why or why not? 4. List four major methods for billing online advertising. 5. Which method is used to bill most graphical advertising? What's the term used for this method and what does it stand for? 6. How many impressions are recorded if a single user is served the same ad one thousand times? How many if one thousand users are served the same ad once? 7. Imagine the two scenarios below. Decide which type of campaign would be best for each: text-based CPC advertising or image ads paid for on a CPM basis. Explain your reasoning. 1. Netflix is looking to attract new customers by driving traffic to its Web site and increasing online subscriptions. 2. Zara has just opened a new clothing store in a major retailing area in your town. The company doesn't offer online sales; rather, the majority of its sales come from stores. 8. Which firm runs the world's largest affiliate program? Why is this form of advertising particularly advantageous to the firm (think about the ROI for this sort of effort)? 9. Give examples of where in-game advertising might work and where it might be less desirable.
List key reasons why in-game advertising has not been as successful as other forms of Internet-distributed ads. 1See Interactive Advertising Bureau Ad Unit Guidelines for details at http://www.iab.net/iab_products_and_...1421/1443/1452.
textbooks/biz/Management/Book%3A_Information_Systems__A_Managers_Guide_to_Harnessing_Technology/14%3A_Google-_Search_Online_Advertising_and_Beyond/14.06%3A_More_Ad_Formats_and_Payment_Schemes.txt
Learning Objectives After studying this section you should be able to do the following: 1. Be familiar with various tracking technologies and how they are used for customer profiling and ad targeting. 2. Understand why customer profiling is both valuable and controversial. 3. Recognize steps that organizations can take to help ease consumer and governmental concerns. Advertisers are willing to pay more for ads that have a greater chance of reaching their target audience, and online firms have a number of targeting tools at their disposal. Much of this targeting occurs whenever you visit a Web site, where a behind-the-scenes software dialogue takes place between Web browser and Web server that can reveal a number of pieces of information, including IP address, the type of browser used, the computer type, its operating system, and unique identifiers, called cookies. And remember, any server that serves you content can leverage these profiling technologies. You might be profiled not just by the Web site that you're visiting (e.g., nytimes.com), but also by any ad networks that serve ads on that site (e.g., Platform-A, DoubleClick, Google AdSense, Microsoft adCenter). IP addresses are leveraged extensively in customer profiling. An IP address not only helps with geolocation, it can also indicate a surfer's employer or university, which can be further matched with information such as firm size or industry. IBM has used IP targeting to tailor its college recruiting banner ads to specific schools, for example, "There Is Life After Boston College, Click Here to See Why." That campaign garnered click-through rates ranging from 5.0 to 30 percent (Moss, 1999) compared to average rates that are currently well below 1 percent for untargeted banner ads. DoubleClick once even served a banner that included a personal message for an executive at then-client Modem Media. The ad, reading "Congratulations on the twins, John Nardone," was served across hundreds of sites, but was only visible from computers on the Modem Media corporate network (Moss, 1999). The ability to identify a surfer's computer, browser, or operating system can also be used to target tech ads. For example, Google might pitch its Chrome browser to users detected running Internet Explorer, Firefox, or Safari, while Apple could target those "I'm a Mac" ads just to Windows users. But perhaps the greatest degree of personalization and targeting comes from cookies. Visit a Web site for the first time, and in most cases, a behind-the-scenes dialogue takes place that goes something like this: Server: Have I seen you before? Browser: No. Server: Then take this unique string of numbers and letters (called a cookie). I'll use it to recognize you from now on. The cookie is just a line of identifying text assigned and retrieved by a given Web server and stored on your computer by your browser. Upon accepting this cookie, your browser has been tagged, like an animal. As you surf around the firm's Web site, that cookie can be used to build a profile associated with your activities. If you're on a portal like Yahoo!, you might type in your zip code, enter stocks that you'd like to track, and identify the sports teams you'd like to see scores for. The next time you return to the Web site, your browser responds to the server's "Have I seen you before?" question with the equivalent of "Yes, you know me," and it presents the cookie that the site gave you earlier.
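For the technically curious, the sketch below shows roughly what that exchange looks like in code, using only Python's standard library: the server sends a Set-Cookie header the first time it sees a browser, and the browser returns a Cookie header on every later visit. Real sites and ad networks add expiration dates, domain scoping, and back-end profile databases, none of which are shown here; the server address and cookie name are assumptions made up for this example.

# A minimal "Have I seen you before?" server. Run it, then visit http://localhost:8080
# in a browser: the first response assigns a visitor_id cookie, and reloads are recognized.
from http.server import BaseHTTPRequestHandler, HTTPServer
from http.cookies import SimpleCookie
import uuid

class CookieDemo(BaseHTTPRequestHandler):
    def do_GET(self):
        cookies = SimpleCookie(self.headers.get("Cookie", ""))
        self.send_response(200)
        if "visitor_id" in cookies:
            body = "Yes, you know me: " + cookies["visitor_id"].value
        else:
            new_id = uuid.uuid4().hex                      # the unique identifying string
            self.send_header("Set-Cookie", "visitor_id=" + new_id)
            body = "First visit - take this cookie: " + new_id
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(body.encode("utf-8"))

if __name__ == "__main__":
    HTTPServer(("localhost", 8080), CookieDemo).serve_forever()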
The site can then match this cookie against your browsing profile, showing you the weather, stock quotes, sports scores, and other info that it thinks you’re interested in. Cookies are used for lots of purposes. Retail Web sites like Amazon use cookies to pay attention to what you’ve shopped for and bought, tailoring Web sites to display products that the firm suspects you’ll be most interested in. Sites also use cookies to keep track of what you put in an online “shopping cart,” so if you quit browsing before making a purchase, these items will reappear the next time you visit. And many Web sites also use cookies as part of a “remember me” feature, storing user IDs and passwords. Beware this last one! If you check the “remember me” box on a public Web browser, the next person who uses that browser is potentially using your cookie, and can log in as you! An organization can’t read cookies that it did not give you. So businessweek.com can’t tell if you’ve also got cookies from forbes.com. But you can see all of the cookies in your browser. Take a look and you’ll almost certainly see cookies from dozens of Web sites that you’ve never visited before. These are third-party cookies (sometimes called tracking cookies), and they are usually served by ad networks or other customer profiling firms. Figure 14.12 The Preferences setting in most Web browsers allows you to see its cookies. This browser has received cookies from several ad networks, media sites, and the University of Minnesota Carlson School of Management. By serving and tracking cookies in ads shown across partner sites, ad networks can build detailed browsing profiles that include sites visited, specific pages viewed, duration of visit, and the types of ads you’ve seen and responded to. And that surfing might give an advertising network a better guess at demographics like gender, age, marital status, and more. Visit a new parent site and expect to see diaper ads in the future, even when you’re surfing for news or sports scores! But What If I Don’t Want a Cookie! If all of this creeps you out, remember that you’re in control. The most popular Web browsers allow you to block all cookies, block just third-party cookies, purge your cookie file, or even ask for your approval before accepting a cookie. Of course, if you block cookies, you block any benefits that come along with them, and some Web site features may require cookies to work properly. Also note that while deleting a cookie breaks a link between your browser and that Web site, if you supply identifying information in the future (say by logging into an old profile), the site might be able to assign your old profile data to the new cookie. While the Internet offers targeting technologies that go way beyond traditional television, print, and radio offerings, none of these techniques is perfect. Since users are regularly assigned different IP addresses as they connect and disconnect from various physical and Wi-Fi networks, IP targeting can’t reliably identify individual users. Cookies also have their weaknesses. They’re assigned by browsers and associated with a log-in account profile on that computer. That means that if several people use the same browser on the same computer without logging on to that machine as separate users, then all their Web surfing activity may be mixed into the same cookie profile. (One solution is to create different log-in accounts on that computer. Your PC will then keep separate cookies for each account.) 
Some users might also use different browsers on the same machine, or use different computers. Unless a firm has a way to match up these different cookies with a single user account or other user-identifying information, a site may be working with multiple, incomplete profiles. Key Takeaways • The communication between Web browser and Web server can identify IP address, the type of browser used, the computer type, its operating system, time and date of access, and duration of Web page visit, and can read and assign unique identifiers, called cookies—all of which can be used in customer profiling and ad targeting. • An IP address not only helps with geolocation; it can also be matched against other databases to identify the organization providing the user with Internet access (such as a firm or university), that organization’s industry, size, and related statistics. • A cookie is a unique line of identifying text, assigned and retrieved by a given Web server and stored on a computer by the browser, that can be used to build a profile associated with your Web activities. • The most popular Web browsers allow you to block all cookies, block just third-party cookies, purge your cookie file, or even ask for your approval before accepting a cookie. Questions and Exercises 1. Give examples of how the ability to identify a surfer’s computer, browser, or operating system can be used to target tech ads. 2. Describe how IBM targeted ad delivery for its college recruiting efforts. What technologies were used? What was the impact on click-through rates? 3. What is a cookie? How are cookies used? Is a cookie a computer program? Which firms can read the cookies in your Web browser? 4. Does a cookie accurately identify a user? Why or why not? 5. What is the danger of checking the “remember me” box on a public Web browser? 6. What’s a third-party cookie? What kinds of firms might use these? How are they used? 7. How can users restrict cookie use on their Web browsers? What is the downside of blocking cookies? 8. Work with a faculty member and join the Google Online Marketing Challenge (held spring of every year—see http://www.google.com/onlinechallenge). Google offers ad credits for student teams to develop and run online ad campaigns for real clients and offers prizes for winning teams. Some of the experiences earned in the Google Challenge can translate to other ad networks as well; and first-hand client experience has helped many students secure jobs, internships, and even start their own businesses.
textbooks/biz/Management/Book%3A_Information_Systems__A_Managers_Guide_to_Harnessing_Technology/14%3A_Google-_Search_Online_Advertising_and_Beyond/14.07%3A_Customer_Profiling_and_Behavioral_Targeting.txt
Learning Objectives After studying this section you should be able to do the following: 1. Understand the privacy concerns that arise as a result of using third-party or tracking cookies to build user profiles. 2. Be aware of the negative consequences that could result from the misuse of third-party or tracking cookies. 3. Know the steps Google has taken to demonstrate its sensitivity to privacy issues. 4. Know the kinds of user information that Google stores, and the steps Google takes to protect the privacy of that information. While AdSense has been wildly successful, contextual advertising has its limits. For example, what kind of useful targeting can firms really do based on the text of a news item on North Korean nuclear testing (Singel, 2009)? So in March 2009, Google announced what it calls “interest-based ads.” Google AdSense would now issue a third-party cookie and would track browsing activity across AdSense partner sites, and Google-owned YouTube (the firm had not previously used tracking cookies on its AdSense network). AdSense would build a profile, initially identifying users within thirty broad categories and six hundred subcategories. Says one Google project manager, “We’re looking to make ads even more interesting” (Hof, 2009). Of course, there’s a financial incentive to do this too. Ads deemed more interesting should garner more clicks, meaning more potential customer leads for advertisers, more revenue for Web sites that run AdSense, and more money for Google. But while targeting can benefit Web surfers, users will resist if they feel that they are being mistreated, exploited, or put at risk. Negative backlash might also result in a change in legislation. The U.S. Federal Trade Commission has already called for more transparency and user control in online advertising and for requesting user consent (opt-in) when collecting sensitive data (Singel, 2009). Mishandled user privacy could curtail targeting opportunities, limiting growth across the online advertising field. And with less ad support, many of the Internet’s free services could suffer. Google’s roll-out of interest-based ads shows the firm’s sensitivity to these issues. First, while major rivals have all linked query history to ad targeting, Google steadfastly refuses to do this. Other sites often link registration data (including user-submitted demographics such as gender and age) with tracking cookies, but Google avoids this practice as well. Figure 14.13 Here’s an example of one user’s interests, as tracked by Google’s “Interest-based Ads” and displayed in the firm’s “Ad Preferences Manager.” Google has also placed significant control in the hands of users, with options at program launch that were notably more robust than those of its competitors (Hansell, 2009). Each interest-based ad is accompanied by an “Ads by Google” link that will bring users to a page describing Google advertising and providing access to the company’s “Ads Preferences Manager.” This tool allows surfers to see any of the hundreds of potential categorizations that Google has assigned to that browser’s tracking cookie. Users can remove categorizations, and even add interests if they want to improve ad targeting. Some topics are too sensitive to track, and the technology avoids profiling race, religion, sexual orientation, health, political or trade union affiliation, and certain financial categories (Mitchell, 2009). Google also allows users to install a cookie that opts them out of interest-based tracking.
And since browser cookies can expire or be deleted, the firm has gone a step further, offering a browser plug-in that makes the opt-out permanent, persisting even if a user’s opt-out cookie is purged. Google, Privacy Advocates, and the Law Google’s moves are meant to demonstrate transparency in its ad targeting technology, and the firm’s policies may help raise the collective privacy bar for the industry. While privacy advocates have praised Google’s efforts to put more control in the hands of users, many continue to voice concern over what they see as the increasing amount of information that the firm houses (Helft, 2009). For an avid user, Google could conceivably be holding e-mail (Gmail), photos (Picasa), a Web surfing profile (AdSense and DoubleClick), medical records (Google Health), location (Google Latitude), appointments (Google Calendar), transcripts of phone messages (Google Voice), work files (Google Docs), and more. Google insists that reports portraying it as a data-hoarding Big Brother are inaccurate. The firm is adamant that user data exists in silos that aren’t federated (linked) in any way, nor are employees permitted access to multiple data archives without extensive clearance and monitoring. Data is not sold to third parties. Activity in Gmail, Docs, and most other services isn’t added to targeting profiles. And any targeting is fully disclosed, with users empowered to opt out at all levels (Mitchell, 2009). But critics counter that corporate intentions and data use policies (articulated in a Web site’s Terms of Service) can change over time, and that a firm’s good behavior today is no guarantee of good behavior in the future (Mitchell, 2009). Google does enjoy a lot of user goodwill, and it is widely recognized for its unofficial motto “Don’t Be Evil.” However, some worry that even though Google might not be evil, it could still make a mistake, and that despite its best intentions, a security breach or employee error could leave data dangerously or embarrassingly exposed. Such gaffes and oversights have happened. A March 2009 system flaw inadvertently shared some Google Docs with contacts who were never granted access to them (Kincaid, 2009). And when the firm introduced its Google Buzz social networking service in early 2010, many users were horrified that their most frequently used Gmail contacts were automatically added to Buzz, allowing others to see whom they were communicating with. As one report explained, “Suddenly, journalists’ clandestine contacts were exposed, secret affairs became dramatically less secret, and stalkers obtained a new tool to harass their victims. Oops” (Gold, 2010). Eleven congressmen subsequently asked the U.S. Federal Trade Commission to investigate Google Buzz for possible breaches of consumer privacy (Gross, 2010). Privacy advocates also worry that the amount of data stored by Google serves as one-stop shopping for litigators and government investigators. The counterargument points to the fact that Google has repeatedly mounted an aggressive defense of data privacy in court cases. When Viacom sued Google over copyright violations in YouTube, the search giant successfully fought the original subpoena, which had requested user-identifying information (Mitchell, 2009). And Google was the only one of the four largest search engines to resist a 2006 Justice Department subpoena for search queries (Broache, 2006). Google is increasingly finding itself in precedent-setting cases where the law is vague.
Google’s Street View, for example, has been the target of legal action in the United States, Canada, Japan, Greece, and the United Kingdom. Varying legal environments create a challenge to the global rollout of any data-driven initiative (Sumagaysay, 2009). Ad targeting brings to a head issues of opportunity, privacy, security, risk, and legislation. Google is now taking a more active public relations and lobbying role to prevent misperceptions and to be sure its positions are understood. While the field continues to evolve, Google’s experience will lay the groundwork for the future of personalized technology and provide a case study for other firms that need to strike the right balance between utility and privacy. Despite differences, it seems clear to Google, its advocates, and its detractors that with great power comes great responsibility. Key Takeaways • Possible consequences resulting from the misuse of customer tracking and profiling technologies include user resistance and legislation. Mishandled user privacy could curtail targeting opportunities and limit growth in online advertising. With less ad support, many of the Internet’s free services could suffer. • Google has taken several steps to protect user privacy and has thus far refused to link query history or registration data to ad targeting. • Google’s “Ads Preferences Manager” allows surfers to see, remove, and add to, any of the categorizations that Google has assigned to that browser’s tracking cookie. The technology also avoids targeting certain sensitive topics. • Google allows users to install a cookie or plug-in that opts them out of interest-based tracking. • Some privacy advocates have voiced concern over what they see as the increasing amount of information that Google houses. • Even the best-intentioned and most competent firms can have a security breach that compromises stored information. Google has suffered privacy breaches from product flaws and poorly planned feature rollouts. Such issues may lead to further investigation, legislation, and regulation. Questions and Exercises 1. Gmail uses contextual advertising. The service will scan the contents of e-mail messages and display ads off to the side. Test the “creep out” factor in Gmail—create an account (if you don’t already have one), and send messages to yourself with controversial terms in them. Which ones showed ads? Which ones didn’t? 2. Google has never built user profiles based on Gmail messages. Ads are served based on a real-time scanning of keywords. Is this enough to make you comfortable with Google’s protection of your own privacy? Why or why not? 3. List the negative consequences that could result from the misuse of tracking cookies. 4. What steps has Google taken to give users control over the ads they wish to see? 5. Which topics does “Ads Preferences Manager” avoid in its targeting system? 6. Visit Google’s Ad Preferences page. Is Google tracking your interests? Do you think the list of interests is accurate? Browse the categories under the “Ad Interest” button. Would you add any of these categories to your profile? Why or why not? What do you gain or lose by taking advantage of Google’s “Opt Out” option? Visit rival ad networks. Do you have a similar degree of control? More or less? 7. List the types of information that Google might store for an individual. Do you feel that Google is a fair and reliable steward for this information? Are there Google services or other online efforts that you won’t use due to privacy concerns? Why? 8. 
What steps does Google take to protect the privacy of user information? 9. Google’s “interest-based advertising” was launched as an opt-out effort. What are the pros and cons for Google, users, advertisers, and AdSense partner sites if Google were to switch to an opt-in system? How would these various constituencies be impacted if the government mandated that users explicitly opt in to third-party cookies and other behavior-tracking techniques? 10. What is Google’s unofficial motto? 11. What is “Street View”? Where and on what grounds is it being challenged? 12. Cite two court cases where Google has mounted a vigorous defense of data privacy. 13. Wired News quoted a representative of privacy watchdog group, The Center for Digital Democracy, who offered a criticism of online advertising. The representative suggested that online firms were trying to learn “everything about individuals and manipulate their weaknesses” and that the federal government should “investigate the role [that online ads] played in convincing people to take out mortgages they should not have” (Singel, 2009). Do you think online advertising played a significant role in the mortgage crisis? What role do advertisers, ad networks, and content providers have in online advertising oversight? Should this responsibility be any different from oversight in traditional media (television, print, radio)? What guidelines would you suggest? 14. Even well-intentioned firms can compromise user privacy. How have Google’s missteps compromised user privacy? As a manager, what steps would you take in developing and deploying information systems that might prevent these kinds of problems from occurring?
textbooks/biz/Management/Book%3A_Information_Systems__A_Managers_Guide_to_Harnessing_Technology/14%3A_Google-_Search_Online_Advertising_and_Beyond/14.08%3A_Profiling_and_Privacy.txt
Learning Objectives After studying this section you should be able to do the following: 1. Be able to identify various types of online fraud, as well as the techniques and technologies used to perpetrate these crimes. 2. Understand how firms can detect, prevent, and prosecute fraudsters. There’s a lot of money to be made online, and this has drawn the attention of criminals and the nefarious. Online fraudsters may attempt to steal from advertisers, harm rivals, or otherwise dishonestly game the system. But bad guys beware—such attempts violate terms-of-service agreements and may lead to prosecution and jail time. Studying ad-related fraud helps marketers, managers, and technologists understand potential vulnerabilities, as well as the methods used to combat them. This process also builds tech-centric critical thinking, valuation, and risk assessment skills. Some of the more common types of fraud that are attempted in online advertising include the following: • Enriching click fraud—when site operators generate bogus ad clicks to earn PPC income. • Enriching impression fraud—when site operators generate false page views (and hence ad impressions) in order to boost their site’s CPM earnings. • Depleting click fraud—clicking a rival’s ads to exhaust their PPC advertising budget. • Depleting impression fraud—generating bogus impressions to exhaust a rival’s CPM ad budget. • Rank-based impression fraud—on sites where ad rank is based on click performance, fraudsters repeatedly search keywords linked to rival ads or access pages where rival ads appear. The goal is to generate impressions without clicks. This process lowers the performance rank (quality score) of a rival’s ads, possibly dropping ads from rank results, and allowing fraudsters to subsequently bid less for the advertising slots previously occupied by rivals. • Disbarring fraud—attempting to frame a rival by generating bogus clicks or impressions that appear to be associated with the rival, in hopes that this rival will be banned from an ad network or punished in search engine listings. • Link fraud (also known as spamdexing or link farming)—creating a series of bogus Web sites, all linking back to a page, in hopes of increasing that page’s ranking in organic search. • Keyword stuffing—packing a Web site with unrelated keywords (sometimes hidden in fonts that are the same color as a Web site’s background) in hopes of either luring users who wouldn’t normally visit a Web site, or attracting higher-value contextual ads. Disturbing stuff, but firms are after the bad guys and they’ve put their best geeks on the case. Widespread fraud would tank advertiser ROI and crater the online advertising market, so Google and rivals are diligently working to uncover and prosecute the crooks. • Busting the Bad Guys On the surface, enriching click fraud seems the easiest to exploit. Just set up a Web site, run PPC ads on the page, and click like crazy. Each click should ring the ad network cash register, and a portion of those funds will be passed on to the perpetrating site owner—ka ching! But remember, each visitor is identified by an IP address, so lots of clicks from a single IP make the bad guys easy to spot. So organized crime tried to raise the bar, running so-called click farms to spread fraud across dozens of IP addresses. The Times of India uncovered one such effort where Indian housewives were receiving up to twenty-five cents for each ad click made on fraudster-run Web sites (Vidyasagar, 2004).
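Before looking at how even these distributed schemes get caught, here is a rough illustration of why clicks concentrated on a single IP address are so easy to spot. The click log, the thresholds, and the flagging rule are hypothetical; real ad networks rely on far richer signals:

```python
# Toy screening pass for enriching click fraud: flag any IP address that produces
# an implausibly large share of a publisher site's ad clicks.
# The log, the minimum sample size, and the 50 percent threshold are hypothetical.
from collections import Counter

# (publisher_site, clicking_ip) for each recorded ad click
ad_clicks = [
    ("honest-blog.example", "203.0.113.7"),
    ("honest-blog.example", "198.51.100.2"),
    ("shady-site.example",  "192.0.2.50"),
    ("shady-site.example",  "192.0.2.50"),
    ("shady-site.example",  "192.0.2.50"),
    ("shady-site.example",  "192.0.2.50"),
]

MIN_CLICKS = 4          # ignore sites with too little traffic to judge
MAX_SHARE_PER_IP = 0.5  # no single IP should drive more than half of a site's clicks

for site in {s for s, _ in ad_clicks}:
    ip_counts = Counter(ip for s, ip in ad_clicks if s == site)
    total = sum(ip_counts.values())
    for ip, clicks in ip_counts.items():
        if total >= MIN_CLICKS and clicks / total > MAX_SHARE_PER_IP:
            print(f"Flag for review: {ip} produced {clicks} of {total} clicks on {site}")
```

Click farms and zombie networks spread clicks across many IP addresses precisely to evade this kind of simple check, which is why the geographic, timing, and click-through-rate patterns described below matter so much.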
But an unusually large number of clicks from Indian IP addresses foiled these schemes as well. Fraudsters then moved on to use zombie networks—hordes of surreptitiously infiltrated computers, linked and controlled by rogue software (Mann, 2006). To create zombie networks (sometimes called bot nets), hackers exploit security holes, spread viruses, or use so-called phishing techniques to trick users into installing software that will lie dormant, awaiting commands from a central location. The controlling machine then sends out tasks for each zombie, instructing them to visit Web sites and click on ads in a way that mimics real traffic. Zombie bot nets can be massive. Dutch authorities once took down a gang that controlled some 1.5 million machines (Sanders, 2007; Daswani & Stoppleman, 2007). Scary, but this is where scale, expertise, and experience come in. The more activity an ad network can monitor, the greater the chance that it can uncover patterns that are anomalous. Higher click-through rates than comparable sites? Caught. Too many visits to a new or obscure site? Caught. Clicks that don’t fit standard surfing patterns for geography, time, and day? Caught. Sometimes the goal isn’t theft, but sabotage. Google’s Ad Traffic Quality Team backtracked through unusual patterns to uncover a protest effort targeted at Japanese credit card firms. Ad clicks were eventually traced to an incendiary blogger who incited readers to search for the Japanese word kiyashinku (meaning cashing credit, or credit cards), and to click the credit card firm ads that show up, depleting firm search marketing budgets. Sneaky, but uncovered and shut down, without harm to the advertisers (Jakobsson & Ramzan, 2008). Search firm and ad network software can use data patterns and other signals to ferret out most other types of fraud, too, including rank-based impression fraud, spamdexing, and keyword stuffing. While many have tried to up the stakes with increasingly sophisticated attacks, large ad networks have worked to match them, increasing their anomaly detection capabilities across all types of fraud (Jakobsson & Ramzan, 2008). Here we see another scale and data-based advantage for Google. Since the firm serves more search results and advertisements than its rivals do, it has vastly more information on online activity. And if it knows more about what’s happening online than any other firm, it’s likely to be first to shut down anyone who tries to take advantage of the system. Click Fraud: How Bad Is It? Accounts of the actual rate of click fraud vary widely. Some third-party firms contend that nearly one in five clicks is fraudulent (Hamner, 2009). But Google adamantly disputes these headline-grabbing numbers, claiming that many such reports are based on site logs that reflect false data from conditions that Google doesn’t charge for (e.g., double counting a double click, or adding up repeated use of the browser back button in a way that looks like multiple clicks have occurred). The firm also offers monitoring, analytics, and reporting tools that can uncover this kind of misperceived discrepancy. Google contends that all invalid clicks (mistakes and fraud) represent less than 10 percent of all clicks, that the vast majority of these clicks are filtered out, and that Google doesn’t charge advertisers for clicks flagged as mistakes or suspicious (Lafsky, 2008).
In fact, Google says its screening bar is so high and so accurate that less than 0.02 percent of clicks are reactively classified as invalid and credited back to advertisers (Jakobsson & Ramzan, 2008). So who’s right? While it’s impossible to identify the intention behind every click, the market ultimately pays for performance. And advertisers are continuing to flock to PPC ad networks (and to Google in particular). While that doesn’t mean that firms can stop being vigilant, it does suggest that for most firms, Google seems to have the problem under control. Key Takeaways • Fraud can undermine the revenue model behind search engines, ad networks, and the ad-based Internet. It also threatens honest competition among rivals that advertise online. • There are many forms of online fraud, including enriching fraud (meant to line the pockets of the perpetrators), depleting fraud (meant to waste the ad budgets of rivals), disbarring fraud (meant to frame the innocent as fraudsters), and methods to lower rival ad rank performance, or game search engine ranking algorithms. • While fraudsters have devised ingenious ways to exploit the system (including click farms and zombie attacks), IP addresses and detailed usage pattern monitoring increasingly reveal bogus activity. • Fraud rates are widely disputed. However, it is clear that if widespread fraud were allowed to occur, advertisers would see lower ROI from online ad efforts, and Internet business models would suffer. The continued strength of the online advertising market suggests that while fraud may be impossible to stop completely, most fraud is under control. Questions and Exercises 1. Why is it difficult for an unscrupulous individual to pull off enriching click fraud simply by setting up a Web site, running ad network ads, and clicking? 2. Why did hackers develop zombie networks? What advantage do they offer the criminals? How are they detected? Why do larger ad networks have an advantage in click fraud detection? 3. How can you prevent zombies from inhabiting your computers? Are you reasonably confident you are “zombie-free?” Why or why not? 4. What are spamdexing and keyword stuffing? What risks does a legitimate business run if it engages in these practices, and if they are discovered by search engines? What would this mean for the career of the manager who thought he could game the system? 5. Which types of fraud can be attempted against search advertising? Which are perpetrated over its ad network? 6. What are the consequences if click fraud were allowed to continue? Does this ultimately help or hurt firms that run ad networks? Why?
textbooks/biz/Management/Book%3A_Information_Systems__A_Managers_Guide_to_Harnessing_Technology/14%3A_Google-_Search_Online_Advertising_and_Beyond/14.09%3A_Search_Engines_Ad_Networks_and_Fraud.txt
Learning Objectives After studying this section you should be able to do the following: 1. Understand the challenges of maintaining growth as a business and industry mature. 2. Recognize how the businesses of many firms in a variety of industries are beginning to converge. 3. Critically evaluate the risks and challenges of businesses that Google, Microsoft, and other firms are entering. 4. Appreciate the magnitude of this impending competition, and recognize the competitive forces that will help distinguish winners from losers. Google has been growing like gangbusters, but the firm’s twin engines of revenue growth—ads served on search and through its ad networks—will inevitably mature. And it will likely be difficult for Google to find new growth markets that are as lucrative as these. Emerging advertising outlets such as social networks and mobile have lower click-through rates than conventional advertising, suggesting that Google will have to work harder for less money. For a look at what can happen when maturity hits, check out Microsoft. The House that Gates Built is more profitable than Google, and continues to dominate the incredibly lucrative markets served by Windows and Office. But these markets haven’t grown much for over a decade. In industrialized nations, most Windows and Office purchases come not from growth, but when existing users upgrade or buy new machines. And without substantial year-on-year growth, the stock price doesn’t move. Figure 14.14 A Comparison of Roughly Five Years of Stock Price Change—Google (GOOG) versus Microsoft (MSFT) For big firms like Microsoft and Google, pushing stock price north requires not just new markets, but billion-dollar ones. Adding even \$100 million in new revenues doesn’t do much for firms bringing in \$58 billion and \$24 billion a year, respectively. That’s why you see Microsoft swinging for the fences, investing in the uncertain, but potentially gargantuan markets of video games, mobile phone software, cloud computing (see Chapter 10 “Software in Flux: Partly Cloudy and Sometimes Free”), music and video, and of course, search and everything else that fuels online ad revenue. Search: Google Rules, but It Ain’t Over PageRank is by no means the last word in search, and offerings from Google and its rivals continue to evolve. Google supplements PageRank results with news, photos, video, and other categorizations (click the “Show options…” link above your next Google search). Yahoo! is continually refining its search algorithms and presentation (click the little “down” arrow at the top of the firm’s search results for additional categorizations and suggestions). And Microsoft’s third entry into the search market, the “decision engine” Bing, sports nifty tweaks for specific kinds of queries. Restaurant searches in Bing are bundled with ratings stars, product searches show up with reviews and price comparisons, and airline flight searches not only list flight schedules and fares, but also a projection on whether those fares are likely to go up or down. Bing also comes with a one-hundred-million-dollar marketing budget, showing that Microsoft is serious about moving its search market share out of the single digits. And in the weeks following Bing’s mid-2009 introduction, the search engine did deliver Microsoft’s first substantive search engine market share gain in years.
New tools like the Wolfram Alpha “knowledge engine” (and to a lesser extent, Google’s experimental Google Squared service) move beyond Web page rankings and instead aggregate data for comparison, formatting findings in tables and graphs. Web sites are also starting to wrap data in invisible tags that can be recognized by search engines, analysis tools, and other services. If a search engine can tell that a number on a restaurant’s Web site is, for example, either a street address, an average entrée price, or the seating capacity, it will be much easier for computer programs to accurately categorize, compare, and present this information. This is what geeks are talking about when they refer to the semantic Web. All signs point to more innovation, more competition, and an increasingly more useful Internet! Both Google and Microsoft are on a collision course. But there’s also an impressive roster of additional firms circling this space, each with the potential to be competitors, collaborators, merger partners, or all of the above. While wounded and shrinking, Yahoo! is still a powerhouse, ranking ahead of Google in some overall traffic statistics. Google’s competition with Apple in the mobile phone business prompted Google CEO Eric Schmidt to resign from Apple’s board of directors. Meanwhile, Google’s three-quarters-of-a-billion-dollar purchase of the leading mobile advertiser AdMob was quickly followed by Apple snapping up number two mobile ad firm Quattro Wireless for \$275 million. Add in eBay, Facebook, Twitter, Amazon, Salesforce.com, Netflix, the video game industry, telecom and mobile carriers, cable firms, and the major media companies, and the next few years have the makings of a big, brutal fight. • Strategic Issues Google’s scale advantages in search and its network effects advantages in advertising were outlined earlier. The firm also leads in search/ad experience and expertise and continues to offer a network reach that’s unmatched. But the strength of Google’s other competitive resources is less clear. Within Google’s ad network, there are switching costs for advertisers and for content providers. Google partners have set up accounts and are familiar with the firm’s tools and analytics. Content providers would also need to modify Web sites to replace AdSense or DoubleClick ads with rivals. But choosing Google doesn’t cut out the competition. Many advertisers and content providers participate in multiple ad networks, making it easier to shift business from one firm to another. That likely means that Google will have to retain its partners by offering superior value. Another vulnerability may exist with search consumers. While Google’s brand is strong, switching costs for search users are incredibly low. Move from Google.com to Bing.com and you actually save two letters of typing! Still, there are no signs that Google’s search leadership is in jeopardy. So far users have been creatures of habit, returning to Google despite heavy marketing by rivals. And in Google’s first decade, no rival has offered technology compelling enough to woo away the googling masses—the firm’s share has only increased. Defeating Google with some sort of technical advantage will be difficult, since Web-based innovation can often be quickly imitated. Google now rolls out over 550 tweaks to its search algorithm annually, with many features mimicking or outdoing innovations from rivals (Levy, 2010). 
The Google Toolbar helps reinforce search habits among those who have it installed, and Google has paid the Mozilla Foundation (the folks behind the Firefox browser) upwards of \$66 million a year to serve as its default search option for the open source browser (Shankland, 2008). But Google’s track record in expanding reach through distribution deals is mixed. The firm spent nearly \$1 billion to have MySpace run AdSense ads, but Google has publicly stated that social network advertising has not been as lucrative as it had hoped (see Chapter 8 “Facebook: Building a Business from the Social Graph”). The firm has also spent nearly \$1 billion to have Dell preinstall its computers with the Google browser toolbar and Google desktop search products. But in 2009, Microsoft inked deals that displaced Google on Dell machines, and it also edged Google out in a five-year search contract with Verizon Wireless (Wingfield, 2009). How Big Is Too Big? Microsoft could benefit from embedding its Bing search engine into its most popular products (imagine putting Bing in the right-click menu alongside cut, copy, and paste). But with Internet Explorer market share above 65 percent, Office above 80 percent, and Windows at roughly 90 percent (Montalbano, 2009), this seems unlikely. European antitrust officials have already taken action against Redmond’s bundling Windows Media Player and Internet Explorer with Windows. Add in a less favorable antitrust climate in the United States, and tying any of these products to Bing is almost certainly out of bounds. What’s not clear is whether regulators would allow Bing to be bundled with less dominant Microsoft offerings, such as mobile phone software, Xbox, and MSN. But increasingly, Google is also an antitrust target. Microsoft has itself raised antitrust concerns against Google, unsuccessfully lobbying both U.S. and European authorities to block the firm’s acquisition of DoubleClick (Broach, 2007; Kawamoto & Broach, 2007). Google was forced to abandon a fall 2008 search advertising partnership with Yahoo! after the Justice Department indicated its intention to block the agreement (Yahoo! and Microsoft have since inked a deal to share search technology and ad sales). The Justice Department is also investigating a Google settlement with the Authors’ Guild, a deal in which critics have suggested that Google scored a near monopoly on certain book scanning, searching, and data serving rights (Wildstrom, 2009). And yet another probe is investigating whether Google colluded with Apple, Yahoo! and other firms to limit efforts to hire away top talent (Buskirk, 2009). Of course, being big isn’t enough to violate U.S. antitrust law. Harvard Law’s Andrew Gavil says, “You’ve got to be big, and you have to be bad. You have to be both” (Lohr & Helft, 2009). This may be a difficult case to make against a firm that has a history of being a relentless supporter of open computing standards. And as mentioned earlier, there is little forcing users to stick with Google—the firm must continue to win this market on its own merits. Some suggest regulators may see Google’s search dominance as an unfair advantage in promoting its related properties such as YouTube and Google Maps over those offered by rivals (Vogelstein, 2009)—an advantage not unlike Microsoft’s use of Windows to promote Media Player and Internet Explorer. While Google may escape all of these investigations, increased antitrust scrutiny is a downside that comes along with the advantages of market-dominating scale.
textbooks/biz/Management/Book%3A_Information_Systems__A_Managers_Guide_to_Harnessing_Technology/14%3A_Google-_Search_Online_Advertising_and_Beyond/14.10%3A_The_Battle_Unfolds.txt
Nothing endures but change. Heraclitus, 535 – 475 BC Objectives After reading this chapter, you will be able to • Define terms related to project management • Discuss two essential qualities of good project managers • Explain the difference between geometric and living order • Define “project outcome,” “project success,” and “sustainability” in the context of a project’s full life cycle • Identify four roles of a project manager • Provide a basic introduction to Lean, including the six principles of Lean • Explain fundamentals of Agile software development, including sprints and product stories The Big Ideas in this Lesson • Living order thinking recognizes that a system of things is always in the process of remaking itself, and that a system is always—at some level—in a state of uncertainty. In project management, this suggests that projects happen in dynamic environments, and that unexpected events should be considered part of the project’s life cycle. It is fundamentally adaptive. • Project managers traditionally idealize the more prescriptive geometric order, in which one stage necessarily leads to the next stage. It is an essential component of project management, but when relied on exclusively, geometric order thinking can lead to inefficient and ineffective results. • Lean thinking focuses on eliminating waste. In project management, Lean thinking provides a way to focus on the customer’s definition of value, which is the only definition that matters. 1.1 Technical Project Management in the Modern World For a complete summary of the latest statistics on project management, see the Project Management Institute’s annual Pulse of the Profession report. You can read the 2018 version of the report here: PMI Pulse of the Profession 2018 Projects are by definition ephemeral—they come and go, depending on an organization’s needs, and eventually come to a close. According to the Cambridge English Dictionary, a project is “a piece of planned work or activity that is completed over a period of time and intended to achieve a particular aim ” (2018). The fleeting nature of projects means that organizations with less than optimal project management proficiency often fail to develop systematic processes for managing them. Once a project is completed, everyone moves on, gearing up for the next, with barely a backward glance at what did and didn’t work in the old project. In other words, these organizations lack a coherent approach to project management—“the application of processes, methods, knowledge, skills, and experience to achieve the project objectives” (Association for Project Management n.d.). This lack of a systematic approach is especially problematic in technical fields, leading to an extremely high rate of failure for technical projects across many industries. A quick web search on the success rate for technical projects offers some eye-opening facts and figures. Depending on which study you read, projects fail at rates between 20 and 80 percent. And throwing more money at the problem doesn’t help. Indeed, the higher a project’s budget, the more likely it is to fail (Mieritz 2012). One study found that IT projects with budgets over \$15 million are project management fiascos: “On average, large IT projects run 45 percent over budget and 7 percent over time” (Bloch, Blumberg and Laartz 2012). The verdict is even more sobering for industrial megaprojects—that is for industrial projects costing over \$1 billion. According to Edward W. 
Merrow, “The oil and gas production sector fares the worst; 78 percent of megaprojects in this industrial sector are classified as failures” (2011, 49). How can organizations find their way out of this morass? By creating a culture of systematic, professional project management that values the skills discussed throughout this book. Research consistently shows that organizations that implement technical project management techniques and processes reap a rich reward in project successes. This book focuses on developing an approach to technical project management that is flexible, adaptable, and open to new learning. It provides many practical suggestions but does not go into specific methods in detail. For guidance on the nuts and bolts of project management see Project Management: The Managerial Process, by Erik W. Larson and Clifford Gray. This 2.5-minute video from the Project Management Institute describes the advantages reaped by a financial institution that created a formal Project Management Office (PMO) to manage all of its IT projects: A YouTube element has been excluded from this version of the text. You can view it online here: pb.libretexts.org/web1/?p=40 2015 PMO of the Year Award Winner – Navy Federal Credit Union 1.2 Be a Hedgehog and a Fox Living Order, Some History Although Henri Bergson was the first thinker to use the term “living order,” the idea that life is a continual process of unpredictable change has ancient roots. The fragments that remain of the work of the pre-Socratic Greek philosopher, Heraclitus, provide profound insights into this idea. He famously reduced the ever-changing nature of the universe to this simple saying: “No man ever steps in the same river twice.” According to Heraclitus, nature is in a constant state of change. Indeed, change is the only constant a human can rely on. Religious thinkers have long recognized that change in our lives is inevitable and unavoidable. For instance, the idea of impermanence is a key component of Buddhism. The Bible, in Ecclesiastes 9:11 points out the futility of striving against the inevitable: Again I saw that under the sun the race is not to the swift, nor the battle to the strong, nor bread to the wise, nor riches to the intelligent, nor favor to those with knowledge, but time and chance overtake them all. As the scientific revolution took hold in Western thought, this understanding of living order went underground, only to reemerge in the work of early twentieth-century thinkers and artists like Bergson, in reaction to World War I and widespread industrialism. In his book Mastering the Leadership Role in Project Management: Practices that Deliver Remarkable Results, Alexander Laufer explains two essential qualities of a project manager, drawing on the hedgehog and the fox metaphor made famous by the philosopher Isaiah Berlin: The fox is a cunning and creative creature, able to devise a myriad of complex strategies for sneak attacks upon the hedgehog. The hedgehog is a painfully slow creature with a very simple daily agenda: searching for food and maintaining his home. Every day the fox waits for the hedgehog while planning to attack him. When the hedgehog senses the danger, he reacts in the same simple, but powerful, way every day: he rolls up into a perfect little ball with a sphere of sharp spikes pointing outward in all directions. Then the fox retreats while starting to plan his new line of attack for the next day. 
(Laufer 2012, 220) The advantage of the fox is that his complex understanding of the world allows him to try out many possibilities, adapting strategies and tactics in response to the current situation. Hedgehogs have one grand strategy that allows them to simplify all experience into “a single overall concept that unifies and guides everything they do.” As you will see in Lesson 2, where we discuss organizational strategy, the hedgehog approach is preferable when it comes to making long-term decisions about an organization’s future. But when it comes to project management, both strategies are powerful, and both can be effective, depending on your situation. Laufer argues that successful managers “behave both like hedgehogs and foxes, though they place the hedgehog in the driver’s seat.” Throughout this course, we will take a foxlike approach to technical project management by keeping an open mind and exploring the many lines of attack available in a particular project. But we will also place a hedgehog-like emphasis on a few basic principles—in particular the basic principles of Lean project management. At the same time, all our discussions will be informed by the distinction between the two fundamental approaches to project management: the traditional, or geometric-order approach, and the more adaptable, living-order approach. 1.3 Two Types of Order: Living and Geometric In his 1907 book Creative Evolution, the French philosopher Henri Bergson investigated the nature of human creativity and its relation to order. According to Bergson, by “order” we generally mean a mechanistic, predetermined, linear relationship between things. Event A leads to Event B; Event B leads to Event C; Event C leads to Event D; and so on, with no possibility of variation or adaptation. We also tend to consider creativity as something that arises only in a state of disorder—that is, when no type of order is evident. The free-thinking artist is a stereotype founded in this way of thinking. But Bergson argued that the disorder we associate with creativity is really just a different type of order (222-224). Again, we turn to Alexander Laufer, who has drawn some powerful insights on project management from Bergson’s work, which he summarizes as follows: Bergson claimed that there is no such thing as disorder, but rather two sorts of order: geometric and living order. While in ”geometric order” Bergson related to the traditional concept of order, in ”living order” he referred to phenomena such as the creativity of an individual, a work of art, or the mess in my office. (2012, 214) Throughout this book we will examine and compare geometric order to living order, with a goal of developing a creative, realistic, functional approach to project management. Qualities of Living and Geometric Order In project management, geometric order aligns with traditional managerial thinking. This concept of order is associated with predictable relationships between the stages of development, such as the relationships shown in Figure 1-1, with one stage necessarily leading to the next. Figure 1-1: Geometric order is predictable, with one stage leading to the next Project managers traditionally idealize such an orderly project progression. Indeed, it is the driving force behind the process-oriented approach to project management that organizations such as the Project Management Institute tend to focus on. 
It is an essential component of project management, and inexperienced project managers should start by mastering a geometric approach to their work. However, when relied on exclusively, geometric order thinking can lead to inefficient and ineffective results. By contrast, living order thinking recognizes that a system of things is always in the process of remaking itself, and that a system is always—at some level—in a state of uncertainty. In project management, this suggests that projects happen in dynamic environments, and that unexpected events should be considered part of a complex project’s life cycle. This is something that experienced project managers learn over time. But even inexperienced project managers can try to incorporate an understanding of living order into their work. Figure 1-2 illustrates the characteristics of living and geometric order. Keep in mind that a project can have qualities of both. Figure 1-2: Characteristics of living order and geometric order You might be called on to use geometric order methods in one situation and living order methods in another. For example, preparing a weather forecast on a typical day in San Diego, where the weather varies little from day to day, is a geometric order task. By contrast, preparing a weather forecast for the coast of Florida with a hurricane expected to hit shore five days in the future is a living order project. Often the planning stage occurs in living order. Then, as you begin to learn more about the project and what to expect, execution proceeds in geometric order. But when something unexpected happens, you could suddenly be plunged back into living order. You have to be prepared to move back and forth between geometric and living order techniques, adapting to the situation as necessary. Traditional project management processes are founded on the presumption that a project can be planned down to the smallest detail, and that after the planning phase is complete, the project manager’s job is to execute the project according to that plan, without surprises. The reality of the modern world is quite different. In his 1991 book, Managing as a Performing Art: New Ideas for a World of Chaotic Change, Peter Vaill argued that today’s organizations actually operate in a state of “permanent white water.” Alexander Laufer describes Vaill’s argument as follows: In using the “permanent white water” metaphor, Vaill calls our attention to the fact that the external environment of contemporary projects is full of surprises, tends to produce novel problems, and is “messy” and ill-structured. (2012, 214) Throughout this book, we will focus on ways to manage technical projects in a permanent white-water world. Predicting the Unpredictable Geometric Order, Some History Geometric order is a product of the Scientific Revolution. Beginning in the mid-1500s, humans harnessed the power of systematic thought to achieve giant leaps forward in mathematics, physics, biology, and many other areas of study. Thinkers from numerous countries contributed to the advances that made modern science possible, but perhaps the most important was Isaac Newton, who modeled a universe predicated on systematic, unchanging laws whose effects could be accurately predicted by mathematical equations. The major human achievements of the last five centuries could not have occurred without this type of systematic thought. Modern life as we know it would not exist without it.
However, as project managers, whose major concern is planning projects that take place over time, we need to understand the limits of our ability to predict the future in an ever-changing world. Anything that involves human beings doing anything over a period of time, with limited resources, involves a certain amount of unpredictability. This means that projects are inevitably shaped by living order. You might think that you have a good handle on what to expect from your co-workers and project stakeholders throughout the course of a project, but often the traits you might consider the most predictable turn out to be completely unreliable. Not too long ago, this realization shook up the field of economics, which was founded on the assumption that humans were totally predictable in their tendency to make choices that enhance their financial well-being. In his groundbreaking work in the field of behavioral economics, Richard H. Thaler demonstrated that the supposedly irrefutable idea that people act rationally in their own self-interest is debatable at best, and probably untrue (Knee 2015). And yet economists consistently refuse to take the unreliability of their basic precept into account. According to Thaler, “Economists discount any factors that would not influence the thinking of a rational person. These things are supposedly irrelevant. But unfortunately for the theory, many supposedly irrelevant factors do matter” (2015). Thaler goes onto argue that “unless you are Spock,” supposedly irrelevant things, such as how you feel about saving for retirement, can have far more profound effects on your economic behavior than mere self-interest (2015). Successful project managers succeed, in part, because they never ignore the power of supposedly irrelevant things to affect project outcomes. Since we can safely assume that the vast majority of your future teammates will not be Vulcans, you should probably also assume that supposedly irrelevant things will end up having unforeseen effects on the projects you manage. You never know what living order will throw your way as a project unfolds. That’s not to say that, as a project manager, you can dispense with the expectations of geometric order. Quite the contrary. Because this is a book on technical project management, our thinking will necessarily be concerned with geometric order. After all, technical projects involve technical products and services whose performance is governed by predictable laws. Gravity always works the same way, so engineers have to take that into account. The latest computer processors can only work so fast in today’s environment, and so software developers have to take that into account, too. However, you need to guard against the tendency to think that because technical products and services are themselves predictable, the projects required to produce them will be equally predictable. That is simply not the case. Because this is a book on management, an endeavor that involves people performing tasks over time, our thinking will be deeply rooted in living order. 1.4 A Project’s Life Cycle and Living Order When you open your eyes to the ever-changing nature of living order, you can begin to appreciate the potential for change inherent throughout a project’s life cycle. The same is true for the result of a project—whether that’s a building, a new smartphone, or software for farm machinery. The product, or result, of a project is created, maintained, adapted, updated, and demolished/retired by various projects during its life. 
Each of these projects is subject to living-order uncertainty, magnifying the difficulty of predicting what the result of a project will look like in the future, and whether it will indeed turn out to be suitable for its intended purpose. This, in turn, complicates the planning phase for any project, particularly for things that are ultimately judged by how easily it can be dismantled and disposed of, or recycled and reused. For example, Figure 1-3 shows the life cycle of a building. The first stage, the making stage, is the domain of the project manager who oversees the building’s construction. Once the building is complete, the project manager moves on to other work, but the building of course has just begun its functional life. During the operating/using/changing stages, the building’s occupants either benefit or suffer from the project manager’s decisions throughout the making stage. Next comes the retirement/reuse stage, in which the building will likely be demolished and something else built in its place. At that point, the entire life cycle starts again. The ease with which these stages unfold depend, at least in part, on choices made by the project manager during the making stage. Only by understanding these later stages can you properly understand the true nature of a project and make decisions that will ensure it produces something of enduring value. Figure 1-3: The product, or result, of a project is created, maintained, adapted, updated, and demolished/retired by various projects during its life In software development, the time periods between stages are shorter than in construction. As in construction, the operating stage is by far the costliest part of the life cycle process. However, when a software product continues to be used beyond its designed operational life, the operating costs can rise at accelerating rates. In any industry, thinking about the life cycle of the result of a project changes how you think about project metrics. As shown in Figure 1-4 , what seems like a good choice from within the limited confines of the making stage might seem foolish when viewed from the broader perspective of the full life cycle. Figure 1-4: Each life cycle stage raises new questions about the success of the initial, making-stage project Project Outcome and Project Success In its narrowest sense, the term project outcome refers to a project’s measurable output in terms of scope, cost, schedule, quality, and other issues such as safety. In a broader sense, the term also refers to the impact a project has compared to its larger goals. In this sense, we take the community perspective, taking into account, for instance, the project’s multiple use potential and eventual redevelopment. For example, in the narrowest sense, the desired project outcome of a proposed sports arena might be a multi-use indoor facility built according to the planned scope, cost, schedule, and so on. However, in the broader sense, the desired project outcome might be redevelopment and revitalization of the surrounding area. For a humorous look at the many ways that project stakeholders can define project success, see “What the Client Wanted” – Arek Fressadi The term project success refers to the degree to which a project is done well, with stakeholders having varying definitions of success over time, depending on their perspectives. 
In other words, the evaluation of a project’s success is a subjective judgement; different stakeholders will have different initial ideas about a project’s overall success based on their own expectations and objectives. To make things more complicated, over time, stakeholders will likely revise their ideas on the project’s success to take into account new information about how the project outcome actually functions in the real world. The changing definition of project success is especially important to keep in mind throughout disruptive projects such as home remodels and road reconstructions. For example, commuters might have an extremely low opinion of an interchange construction when they are suffering through the frustrations of traffic backups and detours. Later, when the interchange is complete, and traffic is flowing more quickly than ever before, they are likely to rate the project’s overall success very high. In the consumer products world, customers looking for a new wireless device might base their idea of project success on ease of use and reliability, whereas the company producing the device might rate project success based on number of units sold. Meanwhile, the industry as a whole might only rate the project a success if it sets a new technical standard. If you limit your perspective to the making stage, it’s easy to think that the terms “project outcome” and “project success” are synonymous. For example, suppose you’re hired to build an energy efficient house according to Leadership in Energy & Environmental Design (LEED) standards. If, at the end of the making stage, you see that your team completed the structure on time and on budget, with all the specified LEED features, then you would probably consider that outcome a success. But as the house enters the Operating/Using/Changing stage, information about the home’s energy use might change your ideas about the success or failure of the project. If in fact the LEED features do not function as expected, then you would probably rate the project’s overall success rather low. Perhaps more importantly, the home’s owner would be unlikely to consider the home a success. And depending on the longevity estimates and ever-changing external factors, the lifespan of the house might also be significantly different than originally projected. Put simply, project success is defined by doing the project well and meeting defined objectives. Project outcome also encompasses whether you and your team did the right thing. It’s important to consider both at multiple levels—for individual tasks, for the overall project, and for the impact of the project over its life. In every case, we need to think broadly about the factors contributing to a project’s success or failure. We risk losing enduring value if we draw too tight a fence around the boundaries we consider when planning and assessing project success. Sustainability and Living Order The ideal project manager has empathy for the people who will be using and modifying the completed project in the future, even the people who will ultimately demolish or recycle it. Ideally, this means incorporating materials that can ultimately be recycled. Indeed, in the European Union, automobile manufacturers are required by law to reduce the non-recyclable waste generated by an end-of-life vehicle (ELV) to 5%. This way of thinking necessitates a more complicated view of a product’s life cycle, as shown in Figure 1-5 (Kanari, Pineau and Shallari 2003). 
Figure 1-5: Product life cycle can ultimately include recycling portions of the product (Source: “End-of-Life Vehicle Recycling in the European Union,” N. Kanari, J.-L. Pineau, and S. Shallari, The Member Journal of the Minerals, Metals & Materials Society www.tms.org/pubs/journals/jom...nari-0308.html. Copyright 2003 by The Minerals, Metals & Materials Society. Used with permission.) Sustainability efforts inspired by a recognition of the realities of living order are well underway in the construction industry. As Lance Hosey, a Washington architect, has argued, sustainable construction means, among other things, creating buildings that can be easily disassembled, minimizing the disruption and contamination inherent in the demolition process (2005). Software developers, too, can develop sustainable software by, for example, writing code that runs even on outdated hardware, thereby minimizing the amount of computer equipment that ends up in landfills (Green Wiki 2015). In addition to being sustainably designed, software also has the potential to promote other sustainability efforts, as discussed in the report Software Accelerates Sustainable Development, published by the nonprofit organization Business for Social Responsibility (2008). David Pagenkopf, director of Application Development and Integration at UW-Madison, has this to say about sustainability and software design: The use of virtualization techniques has largely eliminated hardware as a material factor in software design. The most important issue in software sustainability is selecting the software languages, tools, and design architecture that ensure that software is maintainable for as many years as possible. One of the best examples is writing software that works entirely in a web browser, which can then work across multiple platforms. Even better is writing software using responsive design that automatically adjusts to the user’s end device (e.g. mobile phone, tablet, or laptop/desktop) to present the best possible interface to the user. (pers. comm., August 25, 2015) Consumer products are subject to an ever-increasing array of sustainability expectations. As Bryan Burrough wrote in the New York Times, Wal-Mart reduced “packaging size across its producing lines, saving the company an estimated \$3.4 billion a year…while reducing trash“ (Burrough 2011). Over a decade of effort has resulted in sustainability initiatives that “are having a real impact today. The company has strategically used its scale to its advantage to enact change within as well as outside the organization” (Atamian 2017). 1.5 Four Roles of a Project Manager So what does all this talk about change and unpredictability mean, practically speaking, for a real-life project manager? Throughout this book, we will be investigating ways to accommodate the realities of living order in the daily tasks associated with technical project management. For example, in Lesson 6, we’ll talk about pull planning, an adaptive, recursive form of planning that prioritizes regular updating to reflect current conditions. But for now, let’s talk about some general principles for successful project management in a living order world. In an article for MIT Sloan Management Review, Alexander Laufer, Edward Hoffman, Jeffrey Russell, and Scott Cameron show how successful project managers combine traditional management methods with newer, more flexible approaches to achieve better outcomes (2015). Their research shows that successful project managers adopt these four vital roles: 1. 
Develop collaboration among project participants: “Most projects are characterized by an inherent incompatibility: the various parties to the project are loosely coupled, whereas the tasks themselves are tightly coupled. When unexpected events affect one task, many other interdependent tasks are quickly affected. Yet the direct responsibility for these tasks is distributed among various loosely coupled parties, who are unable to coordinate their actions and provide a timely response. Project success, therefore, requires both interdependence and trust among the various parties” (Laufer et al. 2015, 46). 2. Integrate planning with learning: “Project managers faced with unexpected events employ a ‘rolling wave’ approach to planning. Recognizing that firm commitments cannot be made on the basis of volatile information, they develop plans in waves as the project unfolds and information becomes more reliable. With their teams, they develop detailed short-term plans with firm commitments while also preparing tentative long-term plans with fewer details” (Laufer et al. 2015, 46). 3. Prevent major disruptions: Successful project managers “never stop expecting surprises, even though they may effect major remedial changes only a few times during a project. They’re constantly anticipating disruptions and maintaining the flexibility to respond proactively…. When change is unavoidable, a successful project manager acts as early as possible, since it is easier to tackle a threat before it reaches a full-blown state” (Laufer et al. 2015, 47). 4. Maintain forward momentum: “When unexpected events affect one task, many other interdependent tasks may also be quickly impacted. Thus, solving problems as soon as they emerge is vital for maintaining work progress” (Laufer et al. 2015, 48). Adopting these four roles will set you on the road toward delivering more value in your projects, with less waste, which is also the goal of both Lean project management and Agile project management. 1.6 Lean: Eliminating Waste in Living Order Traditional, geometric order project management is often inefficient, leading to wasted time, money, resources, and labor. By contrast, Lean is a business model and project management philosophy that offers a means to streamline projects while allowing for the flexibility required to deal with unexpected events. Based on ideas and practices developed at Toyota after World War II, it emphasizes creating value for the customer while eliminating waste through the efficient flow of work from one phase of a project to another. More than anything, Lean is a way of thinking. In their essential book on the topic, James P. Womack and Daniel T. Jones describe Lean thinking as follows: It provides a way to specify value, line up value-creating actions in the best sequence, conduct these activities without interruption whenever someone requests them, and perform them more and more effectively. In short, Lean thinking is Lean because it provides a way to do more and more with less and less—less human effort, less equipment, less time, and less space—while coming closer and closer to providing customers with exactly what they want. (2003, 15) We’ll be discussing Lean extensively throughout this book. To get started here, we’ll focus on two fundamental Lean ideas: value and waste. Value In ordinary conversation, “value” is a generic term that refers to the overall worth or usefulness of something. 
But in Lean, value is only meaningful “when expressed in terms of a specific product (a good or a service, and often both at once) which meets the customer’s needs at a specific price at a specific time” (Womack and Jones, 16). In other words, value is defined by the customer, not by the manufacturer, the contractor, or the service provider—and definitely not by the engineer responsible for designing the product. Integrated Project Delivery In reading about project management and Lean, you might come across the term integrated project delivery (IPD). Inspired by Lean thinking, IPD is a means of contractually aligning stakeholders in a construction project in a way that emphasizes close collaboration, with the goal of delivering value as defined by the customer. One feature of IPD is a type of contract known as a multi-party agreement, which explains each participant’s role in the project. A related methodology, integrated product delivery, was developed as a reaction against the silo-ed approach to product development, in which the design team designed a product, and then “threw it over the wall” to the manufacturing team, which had to figure how to build the product with no input into its design. Integrated product development’s more collaborative approach improves time-to-market, while fostering innovation. This sounds simple, but it can be a difficult concept for engineers, with all their technical expertise, to embrace. In their book, Womack and Jones include a chapter on Porsche, which suffered a sales collapse in the mid 1980’s largely because its world-class engineers had blinded themselves to their customers’ definition of value: Designs with more complexity produced with ever more complex machinery were asserted to be just what the customer wanted and just what the production process needed…. It often became apparent that the strong technical functions and highly trained technical experts leading German firms obtained their sense of worth—their conviction that they were doing a first-rate job—by pushing ahead with refinements and complexities that were of little interest to anyone but the experts themselves…. Doubts about proposed products were often countered with claims that “the customer will want it once we explain it,” while recent product failures were often explained away as instances where “the customers weren’t sophisticated enough to grasp the merits of the product.” (2003, 17) This is only one example of the kinds of preconceptions that can distort a company’s understanding of the value it is supposedly producing for the benefit of the customer. Womack and Jones provide in-depth case studies detailing the forces that can prevent a company from understanding what its customers actually want: The definition of value is skewed everywhere by the power of preexisting organizations, technologies, and undepreciated assets, along with outdated thinking about economies of scale. Managers around the world tend to say, “This product is what we know how to produce using assets we’ve already bought, so if customers don’t respond we’ll adjust the price or add bells and whistles.” What they should be doing instead is fundamentally rethinking value from the perspective of the customer. (2003, 17-18) To make the leap into Lean thinking, you need to fully grasp the nature of value, which is why we will return to this idea throughout this book. You also need to understand its opposite—waste. The whole goal of Lean is to maximize value and eliminate waste. 
Waste According to the Lean Lexicon, waste[1] is “Any activity that consumes resources but creates no value for the customer” (Lean Enterprise Institute 2014). Identifying waste can be as difficult for new Lean thinkers as identifying value. Taiichi Ohno, the Toyota executive who pioneered the focus on waste and value that we now call Lean, identified seven forms of waste. You can find countless explanations of the seven wastes in books and articles. The following is adapted from “7 Wastes” – Lean Manufacturing Tools: • Transportation: Moving people, machinery, or materials farther than is really necessary. A huge amount of transportation waste is necessitated by poor factory layouts, large batch sizes, and distant storage locations, just to name a few causes. • Inventory: A build-up of stock due to, for example, poor planning, or the time required to change over machinery from one process to another. • Motion: Any movement of humans or equipment that does not increase the value of a product or service. Examples include bending and reaching necessitated by a poorly designed work station, or by badly organized storage areas. • Waiting: Humans or machines standing idle. Can be caused by long changeovers, poorly coordinated processes, or the need to rework flawed parts, among other things. • Overproduction: Creating more than can be used or sold in a reasonable time. This is considered the worst form of waste, because “it obscures all of the other problems within your processes” (Lean Manufacturing Tools n.d.). Later in this book, we’ll talk about how to avoid this form of waste through Lean techniques such as pull planning. • Over-processing: Doing more than is useful or necessary from the point of view of the customer. The over-engineering at Porsche, described by Womack and Jones, is a clear example of over-processing. A more mundane example might be a restaurant that uses expensive imported cheese on pizzas, when customers would actually prefer domestic mozzarella. • Defects: Time and effort required to correct defective parts or poorly rendered services. This is what most people think of when asked to identify waste. But it can be hard to accurately gauge the costs associated with defects, which can include “costs associated with problem solving, materials, rework, rescheduling materials, setups, transport, paperwork, increased lead times, delivery failures, and potentially lost customers” (Lean Manufacturing Tools). Another way to think about waste is to focus on how easily it can be eliminated. When looked at that way, it falls into two types: • Type one waste: Creates “no value but is unavoidable with current technologies and production assets” (Lean Enterprise Institute 2014). This kind of waste is necessary but might be eliminated in the future. An example of type one waste might be routine inspections required to ensure that a particular part meets government safety standards. While necessary, such inspections don’t actually provide value from the customer’s point of view, and might conceivably be eliminated if the part itself was eliminated from the device, or if it was redesigned. • Type two waste: Creates no value and can be eliminated immediately. For example, the time and effort required to transport a newly made microwave oven to the packaging machine is waste that could be eliminated by moving the packaging machine.
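To see how these categories can frame an improvement effort, here is a minimal sketch in Python. The process steps, times, and classifications are all invented for illustration; they are not drawn from any real value stream. The sketch simply tallies how much of a process’s total lead time actually creates value for the customer and flags everything else, by waste type, as an improvement target.

```python
# A minimal sketch with invented numbers: classify each step of a simple
# order-fulfillment process using the waste categories described above, then
# see how much of the total lead time actually creates value for the customer.

steps = [
    # (step, minutes, classification)
    ("Assemble product",           30, "value-adding"),
    ("Move parts from storage",    12, "type two waste"),  # transportation; could be cut now
    ("Wait for paint to cure",     45, "type one waste"),  # unavoidable with the current process
    ("Required safety inspection", 10, "type one waste"),  # necessary, but adds no customer value
    ("Rework defective units",     20, "type two waste"),  # defects; attack the root cause
]

total_time = sum(minutes for _, minutes, _ in steps)
value_added = sum(minutes for _, minutes, kind in steps if kind == "value-adding")

print(f"Total lead time:   {total_time} min")
print(f"Value-adding time: {value_added} min ({value_added / total_time:.0%} of the total)")

print("Improvement targets:")
for step, minutes, kind in steps:
    if kind != "value-adding":
        print(f"  {step}: {minutes} min ({kind})")
```

In practice, the arithmetic is the easy part; the real work of a value-stream analysis is honestly classifying each step from the customer’s point of view.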
This blog provides some real-life examples of the seven forms of waste in a variety of industries: “Real Examples of the 7 Wastes of Lean” – KaiNexus In project management, an example of type one waste might be an audit necessary to measure the performance of contracted work or a promised product against an agreed-upon set of requirements. From the customer’s perspective, such an audit adds no value, but it is necessary to ensure the successful completion of the project. A type two waste often seen in project management is constant requests for status updates. New project managers who haven’t yet built trust with their team sometimes succumb to this form of waste as they try to micro-manage all tasks. Regularly scheduled updates and escalation expectations for unexpected challenges help to eliminate this type two waste in project management. Six Lean Principles The six principles at the heart of Lean thinking are: specify value, identify the value stream, flow, pull, perfection, and respect for people. You’ll learn more about these ideas, and how they relate to technical project management, throughout this book. We’ll explain them briefly here, to give you a foundation to work from. Most of the examples in this lesson are drawn from manufacturing, where Lean got its start. But keep in mind that Lean thinking has been widely adopted in industries as diverse as construction and healthcare. 1. Identify value: As explained earlier, value can only be defined by the customer. As a project manager, you have to start by learning what that definition is—ideally by talking directly to the customer. However, you may find that customers “only know how to ask for some variant of what they are already getting” (Womack and Jones 2003, 31). This means that identifying value often entails asking probing questions designed to elicit a definition of value from customers who may not have had the opportunity to think it through themselves. Often, the best questions to start with are: What problem do you want to solve? What does success look like? 2. Map the value stream: The value stream is “all of the actions, both value-creating and nonvalue-creating, required to bring a product from concept” to delivery (Lean Enterprise Institute 2014). In any industry, the vast majority of activities in the value stream create no value and are therefore waste. Firms that attempt to analyze the value stream for a particular product typically have to look far beyond their own premises, taking into account everything involved in bringing a product to market. For example, the value stream for a new type of bread might begin with groundwater used to irrigate wheat farms in Nebraska. When looking at value streams from a project management perspective, the goal is to understand all aspects of the project, including initiation, planning, executing, monitoring & control, and closeout. Batch and Queue: The Opposite of Flow Suppose you set up a batch and queue system for baking, frosting, decorating, and boxing a hundred birthday cakes at a bakery. As you bake the cakes, you need to come up with a good storage solution, so they stay fresh until you are ready to frost them. After all the cakes are frosted and you move on to the decorating step, you might find that the decorations don’t stick to the frosting. If you had just baked, frosted, and decorated one cake, you would have discovered and solved this problem before it became a defect affecting all the cakes.
But since you have already baked and frosted all the cakes, you’re now in the position of having to buy new decorations that will stick to the frosting already on the cake. A similar problem could arise with the boxes. You might have a hundred cakes frosted and decorated, ready to be boxed, and then discover they don’t fit in the boxes you ordered. Meanwhile, the cakes are getting stale, and might soon become unsellable. 3. Continuous flow: According to Womack and Jones, most people tend to think the most efficient way to complete any multi-step project is to divide it into batches—performing the first step on all available materials and setting the results aside until all the materials have been processed. After the entire batch has been completed, you then move onto the next step, processing the entire batch, and so on, through all the steps. This approach, known as batch and queue, can be useful in many situations, but it’s often wildly inefficient and can lead to defects that aren’t detected until many steps into the process (Womack and Jones 2003, 22). To avoid the problems associated with batch and queue production, Lean emphasizes continuous flow from one step to the next, with small batches that can be immediately processed by workers at the next step. True continuous flow, which dramatically reduces production time, is only possible after you have eliminated the waste of non-value creating steps, and then rearranged the remaining steps so that they unfold one after the other. It is not a realistic goal in all industries, but you can often achieve some benefits of flow by moving machinery and relocating personnel. In project management, flow can become an issue during scheduling. For example, a project team might make the mistake of laboring over an unnecessarily detailed schedule with overly discrete time blocks, planning tasks for a multi-year project in hours. Then, after wasting all this time on an unrealistic schedule, the team might fail to review and update actual progress as the project progresses. This lack of flow can present real risks to the project’s overall success. By contrast, Lean-thinking project managers understand that a schedule is a living document, and that, throughout a project, they’ll need to address living order changes (both positive and negative), allowing for true flow throughout the life of the project. 4. Pull: To understand the meaning of pull, you first have to understand the meaning of push. Traditional production systems are considered push systems, with work dictated by production schedules which are sometimes tied to accurate forecasts of customer demand, but often are not. A push system easily results in the waste of over-production. Mark Graban offers some examples: • A fast food restaurant making food and storing it under heat lamps (some of it gets thrown away). • An automaker building an excess number of cars or trucks and forcing the dealers to take them. • The U.S. Mint producing dollar coins that far exceed customer demand. • Computer makers building product and shipping it to retailers to sit on the shelves. (2014) By contrast, a pull system builds products and uses up materials based on actual customer demand, the way a sandwich shop might make your favorite turkey and guacamole sandwich after you order it. In reality, most systems use a combination of pull and push production. 
For example, making up your sandwich on the spot is a pull activity, but the store would probably have practiced push production by previously ordering turkey and guacamole according to forecasts of customer demand. These examples greatly simplify the Lean concept of pull. In practice, especially in giant factories or on construction sites, it is far more complex. But the basic, waste-reducing principle is always the same: “no one upstream should produce a good or service until the customer downstream asks for it” (Womack and Jones 2003, 67). This excellent blog post uses a concession stand example to illustrate the way a pull system avoids the wastes of inventory and over-production by keeping only small quantities of materials on-hand, replacing items as they are used: Lean Blitz Consulting – “Toyota Way Principle #3: “Pull” Systems” Translating the concept of pull to project management can be difficult but yields powerful results. The team starts with the end-point—the ultimate goal of the project—and pulls activities forward, describing what must be done each step of the way. This differs greatly from standard project management, which starts from the beginning, building a schedule that assumes previous tasks are completely finished before the team starts on the next tasks. By contrast, in a Lean schedule, more tasks overlap. In a presentation on project planning for a transmission line design, Kristine Engel explains that, in a pull schedule, “the ‘later tasks’ may start before ‘earlier’ tasks end,” and that some “design refinement tasks may overlap with the labor bid process or even construction, which reflects the reality of utility projects.” Despite this fluidity, the team is able to track progress by comparing billed hours to the budget. The whole system is built on “regular communication with the client to revise short-term goals in relation to the full project timeline” (Engel 2017). 5. Perfection: Experienced practitioners of Lean testify that, as you get better at identifying the customer’s definition of value, you become more precise in identifying every step in the value stream. As a result, you reduce the waste of non-value adding activity, thereby improving flow. As you gain experience, you’ll start to see opportunities for Lean improvements everywhere. It’s like lifting weights—the more you lift, the more you can lift. The more waste you eliminate, the more waste you can eliminate. According to Jones and Womack, this happens because the four initial principles interact with each other in a virtuous circle. Getting value to flow faster always exposes hidden muda [waste] in the value stream. And the harder you pull, the more the impediments to flow are revealed so they can be removed. Dedicated product teams in direct dialogue with customers always find ways to specify value more accurately and often learn of ways to enhance flow and pull as well. (2003, 25) 6. Respect for People: Above all else, Lean requires constant communication among all stakeholders. Implementing the first five principles of Lean is only possible when all team members respect and listen to each other, share ideas, accept suggestions, and collaborate to solve problems and eliminate waste. Respect for people is not about being nice—it’s about understanding that you can’t solve problems on your own, and that instead you need to engage sincerely and honestly with co-workers. Sometimes that means challenging them and offering criticism. It always means being willing to admit when you’re wrong. 
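Returning to the pull principle for a moment, the following minimal sketch contrasts a push station that builds to a fixed forecast with a pull station that holds only a small buffer and replenishes just what customers actually took. All of the quantities are made up; the point is simply that overproduction shows up as steadily growing inventory on the push side, while the pull side never holds more than its buffer.

```python
import random

random.seed(1)

# A minimal sketch with made-up numbers: a push station builds to a fixed daily
# forecast regardless of demand, while a pull station replenishes only what the
# downstream customer consumed (a small, kanban-style buffer).

FORECAST_PER_DAY = 12          # the push station builds this much every day
BUFFER_SIZE = 8                # the pull station never holds more than this

daily_demand = [random.randint(5, 12) for _ in range(20)]

push_inventory = 0
pull_inventory = BUFFER_SIZE

for demand in daily_demand:
    # Push: build to forecast first, then ship whatever customers ask for.
    push_inventory += FORECAST_PER_DAY
    push_inventory -= min(demand, push_inventory)

    # Pull: customers take from the buffer; downstream use signals replenishment
    # of exactly that amount, so the buffer is simply restored each day.
    pull_inventory -= min(demand, pull_inventory)
    pull_inventory = BUFFER_SIZE

print(f"Average daily demand: {sum(daily_demand) / len(daily_demand):.1f} units")
print(f"Push inventory after {len(daily_demand)} days: {push_inventory} units (overproduction)")
print(f"Pull inventory after {len(daily_demand)} days: {pull_inventory} units (capped at the buffer)")
```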
Push and Pull This two-minute video explains the difference between push and pull production: pb.libretexts.org/web1/?p=40 1.7 Agile: Fast Feedback in Living Order Lean was originally developed in the world of manufacturing but has been adopted in many industries. In the world of software development, a related methodology, Agile, is becoming increasingly popular. It is essentially an IT version of Lean. Although Agile had its roots in software development, companies have also expanded its use into a variety of project types, including product development, capital projects, and service projects. The many flavors of Agile include: • Agile Scrum: Designed for completing complex projects, as described on ScrumGuides. Scrum is the most widely used form of Agile. When people talk about Agile, they are usually talking about Scrum. • Extreme Programming: Emphasizes short development cycles with frequent releases of software for evaluation, after which a new development cycle begins. You can read more about extreme programming at “Extreme Programming: A Gentle Introduction.” • Rapid Product Development: Emphasizes “simultaneous, coordinated activities by multi-functional teams, striving for smooth transitions between phases for the most rapid time-to-market” (ORC International: Expert Advisory Services). You can read more about Rapid Product Development in this “Introduction to Rapid Product Development.” Another Form of Agile: Hackathons, another type of Agile experience, are multiday events in which software developers work on a solution to a specific problem with the goal of generating a number of innovative ideas and/or prototypes. Hackathons are similar to Agile sprints, but typically involve more intensive collaboration, with participants gathering in one place and dividing up into teams. Originating as a way for anyone to get involved in creating open-source software, hackathons are now common on college campuses and in the corporate world. For information about upcoming hackathons, see this site dedicated to the topic: HackerEarth. All forms of Agile emphasize an iterative approach to product development, with the project specifications evolving along with the customer’s notion of the software requirements. A project starts with a conversation between the developer and the product owner about what the customer wants the software to do. In Scrum terminology, the customer is the product owner, and the features that the product owner wants in the software are known as product stories. With a description of the product stories in hand, the Agile developer gets to work, creating pieces of software that address individual product stories. After a one- to two-week cycle of development (known in Scrum as a sprint), the developer hands off the new software to the product owner so she can try it out and make suggestions for improvement. The developer then begins another sprint, incorporating those suggestions into a new iteration. After every sprint, the product owner has the chance to redirect the team to new product stories, or to revise the team’s understanding of the existing product stories. Through these repeated interactions, which provide fast, focused feedback, the developer and the product owner zero in on a software application that does what the product owner needs it to do.
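The following sketch models that feedback loop in miniature. The product stories, effort estimates, and sprint capacity are all invented, and real Scrum involves much more than a prioritized list; the point here is simply the rhythm of plan, build, review, and reprioritize described above.

```python
# A minimal sketch (all stories and estimates are invented): the product owner
# keeps a prioritized backlog of product stories, the team delivers a small
# batch each sprint, and the owner reviews the result and may add or
# reprioritize stories before the next sprint begins.

backlog = [
    # (product story, estimated effort in points)
    ("Search the product catalog", 5),
    ("Save items to a wish list", 3),
    ("Check out with a credit card", 8),
]
CAPACITY_PER_SPRINT = 8        # points the team can finish in one sprint

sprint = 1
while backlog:
    # Plan: pull the highest-priority stories that fit the team's capacity.
    planned, remaining_capacity = [], CAPACITY_PER_SPRINT
    for story, effort in list(backlog):
        if effort <= remaining_capacity:
            planned.append(story)
            remaining_capacity -= effort
            backlog.remove((story, effort))

    print(f"Sprint {sprint}: delivered {planned}")

    # Review: the product owner tries the new software and may revise the backlog.
    if sprint == 1:
        backlog.insert(0, ("Show related items on the wish list", 2))
        print("  Product owner feedback: added a new story and reprioritized")

    sprint += 1
```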
If time and money are tight, as they often are, the product owner has regular opportunities to make choices about which product stories are the most important, and which can be dispensed with if necessary. Agile development is essentially a learning process through which the developer and the product owner create a shared understanding of how many features they can create, given the allotted time and money. It’s very much a living order approach to project management, in that the early stages involve some ambiguity and many unknowns. According to Robert Merrill, a Senior Business Analyst at the University of Wisconsin-Madison, and an Agile coach, “Agile is a way to manage projects in the face of unpredictability and constraints—often very rigid time and budget constraints. The fast feedback allows the team to create the best possible software within the given constraints” (2017). Like Lean, Agile will be a recurring topic throughout this book. To get started learning about Agile on your own, see the following: Agile: A New Kind of Engineering In his fascinating lecture “Real Software Engineering,” Glenn Vanderburg presents the history of software engineering (2011). He explains how early software developers tended to think of software engineering in terms that were familiar to them from structural engineering, because that’s what they thought the term engineering meant. Vanderburg advocates a new, simple definition of engineering: whatever works. History tells us that what used to be called software engineering actually had little to do with engineering, because so-called “software engineering projects” were riddled with waste, rework, and failure. In other words, it didn’t work. According to Vanderburg, Agile is the only real form of software engineering. It is fundamentally different from structural engineering, in part because it allows for instantaneous, essentially free testing—something that is impossible when building planes or bridges. Also, whereas other types of engineering typically involve modeling something over a long period of weeks or months, and then getting feedback, also over weeks or months, Agile developers receive feedback over different time scales. For individual blocks of code, they can get important feedback in minutes or hours by simply sharing it with another developer or with the customer. For larger parts of the project, such as acceptance testing or a release of new features, getting feedback is more expensive and takes place over weeks or months. The main reason feedback and testing in Agile differ so much from other types of engineering is that the source code is itself the model. By writing code, Agile developers create both the testable model and the final product at the same time. In Vanderburg’s words: “Agile processes are economical, cost-tuned feedback engines.” ~Practical Tips • Be prepared to use both geometric and living order techniques: Projects are often conceived and planned in geometric order, with the naïve assumption of events unfolding predictably. Then reality hits, and they are executed amidst the uncertainties of living order. However, eventually, as projects unfold, and you begin to learn what to expect, they can become more geometric. Be prepared to move back and forth between geometric and living order techniques, adapting to the situation as necessary. • When working in geometric order, focus on the following: • Define project success. • Establish a project timeline. • Ensure the project delivers the specified results.
• Constantly check your progress against the project schedule. • Regularly check costs against the project budget. • Periodically pause to make sure the project really is unfolding in geometric order and hasn’t shifted to living order. • When working in living order, follow good geometric practices when appropriate, but also focus on the following: • Ensure that all stakeholders understand the project’s shared value and are committed to achieving it. • Incorporate every useful form of communication to make sure project stakeholders understand what’s going on at every stage of the project. • Focus on activities that create value and eliminate wasteful activities. • Be prepared to respond to changing events, staying agile and adaptable. • Take the time to identify the unique and changing context of a project: A project’s context—the day-to-day environment and the larger organizational background in which a project unfolds—is rarely the same from one project to the next, and can change throughout the course of the project. By identifying the unique context of each project, and the many ways it could change, you’ll reduce your chances of making assumptions that could turn out to be wrong. ~Summary • Projects unfold in unique and changing contexts that call for a flexible, adaptable approach. • Organizations often conceive projects in the unpredictable upheaval of living order and then attempt to execute them in the more systematic geometric order, planning every step down to the last detail. Successful project managers never lose sight of the unpredictable, permanent whitewater world in which projects actually unfold. • Understanding that a project’s life cycle involves more than just the making stage will expand your understanding of “project success.” • Lean project management focuses on maximizing value and eliminating waste. • Agile project management strategies encourage the flexibility required in living order. ~Glossary • Agile—A project management methodology that emphasizes an iterative approach to product development, with the project specifications evolving along with the customer’s notion of the software requirements. There are many flavors of Agile, but the most widely used is Scrum. • behavioral economics—According to OxfordDictionaries.com, “a method of economic analysis that applies psychological insights into human behavior to explain economic decision-making.” • geometric order—A type of order identified by the French philosopher Henri Bergson that is characterized by linear development, clear cause and effect, and predictable events. • integrated project delivery (IPD)—A means of contractually aligning stakeholders in a construction project in a way that emphasizes close collaboration, with the goal of delivering value as defined by the customer. IPD is inspired by Lean and relies on a type of contract known as a multi-party agreement, which explains each participant’s role in the project. • Lean—A business model and project management philosophy that offers a means to streamline projects while allowing for the flexibility required to deal with unexpected events. It emphasizes the elimination of waste through the efficient flow of work from one phase of a project to another. • living order—A type of order identified by the French philosopher Henri Bergson that is characterized by rapid change and unpredictable events. 
• project— A “piece of planned work or activity that is completed over a period of time and intended to achieve a particular aim”(Cambridge English Dictionary 2018). • project outcome—In its narrowest sense, a project’s measurable output—whether that’s a building, a software application, or a part for a fighter jet. In a broader sense, the impact a project has compared to its larger goals. • project success—The degree to which a project is done well. Stakeholders’ evaluation of project success is a subjective judgement, varying depending on their perspective, and typically changes over time. • project management—The “application of processes, methods, knowledge, skills, and experience to achieve the project objectives” (Association for Project Management 2018). • value—In ordinary conversation, a generic term that refers to the overall worth or usefulness of something. But in Lean, value is only meaningful “when expressed in terms of a specific product (a good or a service, and often both at once) which meets the customer’s needs at a specific price at a specific time” (Womack and Jones 2003, 16). In other words, value is defined by the customer. ~Additional Resources • The Guide to Lean Enablers for Managing Engineering Programs, published by the Joint MIT PMI INCOSE Community of Practice on Lean in Program Management (2012). • Managing as a Performing Art: New Ideas for a World of Chaotic Change, by Peter B. Vaill (1989). In this book Vaill introduces the term “permanent whitewater.” • Richard Thaler’s memoir of his life and work in the field of behavioral economics: Misbehaving: The Making of Behavioral Economics (2015). • The classic introduction to Lean: The Toyota Way: 14 Management Principles from the World’s Greatest Manufacturer, by Jeffrey Liker(2004). ~References Association for Project Management. n.d. “What is project management?” apm.org. Accessed June 15, 2018. https://www.apm.org.uk/resources/wha...ct-management/. Atamian, Luna. 2017. “Why is Walmart a sustainability leader?” Huffington Post, December 14. https://www.huffingtonpost.com/entry...b00caf3d59eae8. Bergson, Henri. 1911. “Creative Evolution.” Project Gutenberg. Accessed June 15, 2018. http://www.gutenberg.org/files/26163...-h/26163-h.htm. Bloch, Michael, Sven Blumberg, and Jürgen Laartz. 2012. “Delivering Large-Scale IT Projects on Time, on Budget and on Value.” McKinsey&Company. October. Accessed August 4, 2016. http://www.mckinsey.com/business-fun...t-and-on-value. Burrough, Bryan. 2011. “Behind the Greening of Wal-Mart.” New York Times, May 14. http://www.nytimes.com/2011/05/15/bu...helf.html?_r=0. Business for Social Responsibility. 2008. “Software Accellerates Sustainable Development.” bsr.org. http://www.bsr.org/reports/BSR_Softw...evelopment.pdf. Cambridge English Dictionary. 2018. “project.” Cambridge Dictionary. Accessed May 12, 2018. https://dictionary.cambridge.org/us/...nglish/project. Engel, Kristine. 2017. “Project Planning for Transmission Line Design.” PowerPoint presentation for class in Technical Project Management, University of Wisconsin-Madison, October 12. Graban, Mark. 2014. “#Lean: Clarifying Push, Pull, and Flow in a Hospital; the Patient “Pulls”.” Mark Graban’s Lean Blog. February 24. https://www.leanblog.org/2014/02/flo...in-a-hospital/. Green Wiki. 2015. “Sustainable Code Development.” Green Wiki. June 9. http://green.wikia.com/wiki/Sustaina...de_development. Hosey, Lance. 2005. “More Constructive Ways to Build a City.” The Washington Post, January 9. 
www.washingtonpost.com/wp-dyn...-2005Jan7.html. Kanari, N., J.L. Pineau, and S. Shallari. 2003. “End-of-Life Vehicle Recycling in the European Union.” Journal of the Minerals, Metals & Materials Society, August. Accessed June 9, 2015. www.tms.org/pubs/journals/jom...nari-0308.html. Knee, Jonathan A. 2015. “In ‘Misbehaving,’ an Economics Professor Isn’t Afraid to Attack His Own.” New York Times, May 5. http://www.nytimes.com/2015/05/06/bu...aler.html?_r=1. Larson, Erik W., and Clifford F. Gray. 2011. Project Management: The Managerial Process, Sixth Edition. New York: McGraw-Hill Education. Laufer, Alexander. 2012. Mastering the Leadership Role in Project Management: Practices that Deliver Remarkable Results. Upper Saddle River: FT Press. Laufer, Alexander, Edward J. Hoffman, Jeffrey S. Russell, and Scott W. Cameron. 2015. “What Successful Project Managers Do.” MIT Sloan Management Review, Spring: 43-51. http://sloanreview.mit.edu/article/w...t-managers-do/. Lean Enterprise Institute. 2014. Lean Lexicon, Fifth Edition. Edited by Chet Marchwinski. Cambridge, MA: Lean Enterprise Institute. Lean Manufacturing Tools. n.d. “Waste of Defects; causes, symptoms, examples and solutions.” Lean Manufacturing Tools. Accessed November 11, 2017. http://leanmanufacturingtools.org/12...and-solutions/. —. n.d. “Waste of Overproduction; causes, symptoms, examples and solutions.” Lean Manufacturing Tools. Accessed November 11, 2017. http://leanmanufacturingtools.org/11...and-solutions/. Merrill, Robert, interview by Ann Shaffer. 2017. Senior Business Analyst, University of Wisconsin-Madison (October 2). Merrow, Edward W. 2011. Industrial Megaprojects: Concepts, Strategies, and Practices for Success. Hoboken, New Jersey: John Wiley & Sons, Inc. Mieritz, Lars. 2012. “Gartner Survey Shows Why Projects Fail.” This Is What Good Looks Like. June 10. https://thisiswhatgoodlookslike.com/...projects-fail/. ORC International: Expert Advisory Services. n.d. Rapid Product Development Experts. Accessed September 9, 2016. http://www.orcexperts.com/experts.as...ct+development. Pagenkopf, David. 2018. “Email on sustainability and software design.” Thaler, Richard H. 2015. “Unless You Are Spock, Irrelevant Things Matter in Economic Behavior.” New York Times, May 8. http://www.nytimes.com/2015/05/10/up...abt=0002&abg=0. Vanderburg, Glenn. 2011. “Lone Star Ruby Conference 2010 Real Software Engineering by Glenn Vanderburg.” Published October 24, 2011. YouTube video, 51:56. https://www.youtube.com/watch?v=NP9A...ature=youtu.be Womack, James P., and Daniel T. Jones. 2003. Lean Thinking: Banish Waste and Create Wealth in Your Corporation, Revised and Updated. New York: Free Press. 1. In a nod to the origins of Lean, the Japanese word for waste, muda, is often used in publications about Lean.
Strategy is making trade-offs in competing. The essence of strategy is choosing what not to do. — Michael E. Porter, “What is Strategy?” (1996) Objectives After reading this chapter, you will be able to • Define terms related to strategy and portfolios • Discuss basic concepts related to strategy • Distinguish between strategy and operational effectiveness • Explain what makes executing a strategy difficult • Discuss issues related to aligning projects with strategy • Explain why killing projects can be hard, and suggest ways to identify a poorly conceived project The Big Ideas in this Lesson • Strategy is a plan to provide something customers can’t get from competitors. Strategy means focusing on what the organization does best. It should be motivated by customer pull, rather than organizational push. • An organization’s strategy should govern everything it does, guiding project selection and execution, and, when necessary, project termination. • Strategy is different from operational effectiveness—cutting costs and increasing efficiency. Operational effectiveness is an important tool. But founding an organization’s strategy entirely on operational effectiveness is a losing game because the competition will always catch up eventually. 2.1 Do the Right Thing Effective project management and execution start with choosing the right projects. While you might not have control over which projects your organization pursues, you do need to understand why your organization chooses to invest in particular projects so that you can effectively manage your projects and contribute to decisions about how to develop and, if necessary, terminate a project. Your study of technical project management will primarily focus on doing things the right way. In this lesson, we’ll concentrate on doing the right thing from the very beginning. As always, it’s helpful to start with some basic definitions: • project: The “temporary initiatives that companies put into place alongside their ongoing operations to achieve specific goals. They are clearly defined packages of work, bound by deadlines and endowed with resources including budgets, people, and facilities” (Morgan, Levitt and Malek 2007, 3). Note that this is a more expansive definition than the Cambridge English Dictionary definition introduced in Lesson 1—“a piece of planned work or activity that is completed over a period of time and intended to achieve a particular aim”—because in this lesson we focus on the tradeoffs necessitated by deadlines and limited resources. • program: “A cluster of interconnected projects” (Morgan, Levitt and Malek 2007, 9). • portfolio: The “array of investments in projects and programs a company chooses to pursue” (Morgan, Levitt and Malek 2007, 3). • strategy: According to Merriam-Webster, “a careful plan or method for achieving a particular goal usually over a long period of time.” As shown in Figure 2-1, a portfolio is made up of programs and projects. An organization’s strategy is the game plan for ensuring that the organization’s portfolios, programs, and projects are all directed toward a common goal. Figure 2-1: Relationship between a portfolio, programs, and projects 2.2 The Essence of Strategy Many books and articles attempt to explain what the term “strategy” really means. But in the end, as Mark Morgan, Raymond E. 
Levitt, and William Malek explain in Executing Your Strategy: How to Break it Down & Get it Done, an organization’s strategy is defined by what the organization invests in—that is, what the organization does: “the best indicator of strategic direction and future outcomes is an enterprise-wide look at what the company is doing rather than what it is saying—what the strategy makers are empowering people at the execution level to accomplish” (2007, 3). An organization without a clearly defined strategy can never expect to navigate the permanent whitewater of living order. This is especially true if the strategy is motivated by the organization attempting to push its vision onto customers, rather than pulling the customer’s definition of value into its daily operations. An organization’s strategy is an expression of its mission and overall culture. In a well-run company, every decision about a project, program, or portfolio supports the organization’s strategy. The strategy, in turn, defines the company’s portfolio and day-to-day operations. Projects and their budgets flow out of the organizational strategy. Morgan et al. emphasize the importance of aligning a company’s portfolio with its strategy: Without clear leadership that aligns each activity and every project investment to the espoused strategy, individuals will use other decision rules in choosing what to work on: first in, first out; last in, first out; loudest demand; squeakiest wheel; boss’s whim; least risk; easiest; best guess as to what the organization needs; most likely to lead to raises and promotion; most politically correct; wild guess—or whatever they feel like at the time. Portfolio management still takes place, but it is not necessarily aligned with strategy, and it occurs at the wrong level of the organization. (2007, 5) As a project manager, you should be able to refer to your organization’s strategy for guidance on how to proceed. You should also be able to use your organization’s strategy as a means of crossing possibilities off your list. Michael E. Porter, author of the hugely influential book Competitive Strategy, explains that strategy is largely a matter of deciding what your organization won’t do. In an interview with Fast Company magazine, he puts it like this: The essence of strategy is that you must set limits on what you’re trying to accomplish. The company without a strategy is willing to try anything. If all you’re trying to do is essentially the same thing as your rivals, then it’s unlikely that you’ll be very successful. It’s incredibly arrogant for a company to believe that it can deliver the same sort of product that its rivals do and actually do better for very long. That’s especially true today, when the flow of information and capital is incredibly fast. (Hammonds 2001) Ultimately, strategy comes down to making trade-offs. It’s about “aligning every activity to create an offering that cannot easily be emulated by competitors” (Porter 2001). Southwest Airlines, which has thrived while most airlines struggle, is often hailed as an example of a company with a laser-like focus on a well-defined strategy. Excluding options from the long list of possibilities available to an airline allows Southwest to focus on doing a few things extremely well—specifically providing reliable, low-cost flights between mid-sized cities.
As a writer for Bloomberg View puts it: By keeping the important things simple and implementing them consistently, Southwest manages to succeed in an industry better known for losses and bankruptcies than sustained profitability. Yet none of this seems to have gone to the company’s head, even after 40 years. As such, the airline serves as a vivid—and rare—reminder that size and success need not contaminate a company’s mission and mind-set, nor erode the addictive enthusiasm of management and staff. (El-Erian 2014) 2.3 Operational Effectiveness is not Strategy In his writings on strategy, Michael Porter takes pains to distinguish between strategy and operational effectiveness—getting things done faster and more cheaply than competitors. Managers tend to confuse these two very different things. A well-defined strategy focuses on what sets an organization apart from the competition—what it can do uniquely well. Operational effectiveness—working faster and cutting costs and then cutting them again—is a game anyone can play. But it’s not viable over the long term because competitors will always catch up: It’s extremely dangerous to bet on the incompetence of your competitors—and that’s what you’re doing when you’re competing on operational effectiveness. What’s worse, a focus on operational effectiveness alone tends to create a mutually destructive form of competition. If everyone’s trying to get to the same place, then, almost inevitably, that causes customers to choose on price. (Hammonds 2001) Michael Porter’s influential article “What is Strategy?” explains the difference between operational effectiveness and strategy, using the success of Southwest Airlines as a real-life example (1996). Subscribers to the Harvard Business Review can read the complete article here: “What is Strategy? Porter published Competitive Strategy in 1980. Since then the business world has changed considerably, becoming faster paced, with more projects unfolding in living order. Some suggest that, in an environment of constant change, picking one strategy and sticking to it is a recipe for disaster. Porter argues that the opposite is true. The secret is to focus on “high-level continuity” that coordinates the assimilation of change: The thing is, continuity of strategic direction and continuous improvement in how you do things are absolutely consistent with each other. In fact, they’re mutually reinforcing. The ability to change constantly and effectively is made easier by high-level continuity. If you’ve spent 10 years being the best at something, you’re better able to assimilate new technologies. The more explicit you are about setting strategy, about wrestling with trade-offs, the better you can identify new opportunities that support your value proposition. Otherwise, sorting out what’s important among a bewildering array of technologies is very difficult. Some managers think, “The world is changing, things are going faster—so I’ve got to move faster. Having a strategy seems to slow me down.” I argue no, no, no—having a strategy actually speeds you up. (Hammonds 2001) 2.4 Lean and Strategy As you learned in Lesson 1, the Lean approach to project management focuses on eliminating waste and maximizing customer value. It is primarily a means of streamlining operational effectiveness, but it also offers major strategic benefits. In a truly Lean organization, managers have the time and autonomy to focus on high-level issues. 
The emphasis on flexibility makes it easier for a Lean organization to pivot to new opportunities that align with the organization’s strategy. In an article for Planet Lean, Michael Ballé explains: Lean is often reduced to a manufacturing tactic because it doesn’t fit the frame of traditional strategy. Lean can’t tell you which niche to pursue, it can’t help you build a roadmap, and it won’t tell you what reasonable objectives are or how to incentivize people to get them. Lean, however, is the key to creating dynamic strategies built on more mindful care of customers, more dynamic objectives (reduce the waste by half every year), faster learning, greater involvement of all people all the time for stronger morale, more determined focus on higher-level goals and quicker exploitation of unexpected opportunities. (2016) According to Ballé, a Lean strategy might look like this: 1. Know your customers and follow their changing expectations; 2. Choose the improvement dimensions to put dynamic pressure on the market (by driving the pressure on your own operations); 3. Learn operational performance faster than your competitors; 4. Develop managers’ autonomy and keep the focus on the bigger issues; 5. Follow through quickly on unexpected gains. (2016) Because implementing Lean effectively requires a buy-in from an entire organization, with everyone from the top down learning to think Lean, succeeding with Lean is difficult. That means the organizations that do succeed have something rare to offer their customers, setting them apart from the competition. In other words, converting an organization to Lean methodologies can be a strategy in and of itself. Truly Lean organizations are first and foremost learning organizations, with a focus on learning everything possible about the market and their customers’ needs. This makes them vastly superior at supplying the value customers want. In their book The Lean Strategy, Ballé, Jones, Chaize, and Fiume make the case for Lean as something more than a means toward operational effectiveness. They see it as a true strategy: Lean strategy represents a fundamentally different approach: seeing the right problems to solve, framing the improvement directions such that every person understands how he or she can contribute, and supporting learning through change after change at the value-adding level in order to avoid wasteful decisions. Sustaining an improvement direction toward a North Star and supporting daily improvement to solve global challenges make up a strategy, and a winning one. (2017, x) 2.5 Why is Executing a Strategy So Hard? Despite the clear advantages of creating and sticking with a strategy, organizations and individual managers have difficulty doing so: Corporations spend about \$100 billion a year on management consulting and training, most of it aimed at creating brilliant strategy. Business schools unleash throngs of aspiring strategists and big-picture thinkers into the corporate world every year. Yet studies have found that less than 10 percent of effectively formulated strategies carry through to successful implementation. So something like 90 percent of companies consistently fail to execute strategies successfully. (Morgan, Levitt and Malek 2007, 1) Why is executing a strategy so difficult? According to Porter, one problem is that managers often mistakenly think that making tradeoffs is a sign of weakness: Trade-offs are frightening, and making no choice is sometimes preferred to risking blame for a bad choice…. 
The failure to choose sometimes comes down to the reluctance to disappoint valued managers or employees. (Porter 1996) Project Failure Close to Home One example of a failed IT project is the UW-Madison’s attempt to implement a new payroll and benefits system. The University finally halted the project in July 2006 after spending \$26 million. System Executive Senior Vice President Don Mash said “We just found it very, very difficult. We probably underestimated the complexity of it when we started way back when” (Foley 2006). You can read more about the project here: “\$26 Million Software Scrapped by UW System: Officials Regret the Loss But Believe A Different Vendor Will Prove Better In the Long Run.” As the Nobel Prize winning economist Herbert Simon demonstrated, in many situations, it is not realistic or even possible to collect all the information necessary to determine the optimal solution. He uses the word satisfice (a combination of satisfy and suffice) to describe a more realistic form of decision-making, in which people accept “the ‘good-enough’ solution rather than searching indefinitely for the best solution” (Little 2011). In order to stay true to its strategy, a successful organization will often choose to satisfice, instead of optimize. Organizations are also very susceptible to the sunk cost fallacy, which is the tendency “to continue investing in a losing proposition because of what it’s already cost…” (Warrell 2015). Managers will often shy away from making alterations to the organization’s strategy if such alterations necessitate cutting projects that have already received significant investment—even if the projects themselves are widely considered to be failures. At the same time, corporations have to consider how killing a project will affect its earnings. As a result, project managers, fearing they’ll be pinned with responsibility for driving down their company’s stock price, refuse to kill projects they know have no chance of succeeding. As an example, let’s consider how killing a project might play out in the IT world, as explained by David Pagenkopf: Most of the work to implement an IT project for internal use must be capitalized, with the cost amortized over the expected life of the software (typically, five years). However, if an IT project is terminated, then the entire cost of the project must be expensed that year. That won’t affect the organization’s cash-flow, but it could materially affect the income statement for that year. For large, multi-year projects, a major write-off can drive net income to a loss and devastate the stock price. In short order, the project manager and the sponsor will likely be looking for a new job. So it’s no surprise people tend to “kick the can” down the road, which of course only magnifies the eventual problem. I once had this problem in a portfolio of projects I inherited when starting a new job. After firing both the internal project manager and the contractor project manager, I had to find a way to put lipstick on a pig and get some value from the project to avoid a write-down. (Note: It is never good when the CFO of a Fortune 500 company has direct interest in an IT project.) My point is that people sometimes continue to make poor investment decisions not because they don’t know better, but rather to buy time, or to escape, sometimes by shifting the blame to someone else.[1] 2.6 Aligning Projects with Strategy Through Portfolio Management Projects are the way organizations operationalize strategy. 
In the end, executing a strategy effectively means pursuing the right projects. In other words, it’s a matter of aligning projects and initiatives with the company’s overall goals. And keep in mind that taking a big-picture, long-term approach to executing a new organizational strategy requires a living order commitment to a certain amount of uncertainty in the short term. It can take a while for everyone to get on board with the new plan, and in the meantime operations may not proceed as expected. But by keeping your eye on the North Star of your organization’s strategy, you can help your team navigate the choppy waters of change. Project selection proceeds on two levels: the portfolio level and the project level. On the portfolio level, management works to ensure that all the projects in a portfolio support the organization’s larger strategy. In other words, management focuses on optimizing its portfolio of projects. According to Morgan et al., portfolio optimization is “the difficult and iterative process of choosing and constantly monitoring what the organization commits to do” (2007, 167). Morgan et al. see portfolio management as the heart and soul of pursuing a strategy effectively: Strategic execution results from executing the right set of strategic projects in the right way. It lies at the crossroads of corporate leadership and project portfolio management—the place where an organization’s purpose, vision, and culture translate into performance and results. There is simply no path to executing strategy other than the one that runs through project portfolio management. (2007, pp. 4-5) To manage portfolios effectively, large organizations often use scenario-planning techniques that involve sophisticated quantitative analysis. One such technique is based on the knapsack problem, a classic optimization problem. Various items, each with a weight and a value are available to be placed in a knapsack. As shown in Figure 2-2, the challenge is to choose the types and numbers of items that can be fit into the knapsack without exceeding the weight limit of the knapsack. Portfolio managers are faced with a similar challenge: choosing the number and types of projects, each with a given cost and value, to optimize the collective value without exceeding resource availability. Figure 2-2: Large organizations often use scenario-planning techniques like the knapsack problem On the project level, teams focus on selecting, refining, and then advancing or, if necessary, terminating individual projects. Some compliance-related projects have to be completed no matter what. But companies typically generate far more ideas for new projects than they can reasonably carry out. So to optimize its portfolio, every organization needs an efficient process for capturing, sorting, and screening ideas for new projects, and then for approving and prioritizing projects that are ultimately green-lighted. We’ll look at some project-selection methods shortly. But first, let’s look at some things that influence project selection. Factors that Affect Project Selection In any organization, project selection is influenced by the available resources. When money is short, organizations often terminate existing projects and postpone investing in new ones. For example, in 2015, the worldwide drop in oil prices forced oil companies to postpone \$380 billion in projects, such as new deep-water drilling operations (Scheck 2016). An organization’s project selection process is also influenced by the nature of the organization. 
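To make the knapsack analogy above a little more concrete, here is a minimal sketch, assuming Python as the notation and using purely hypothetical project names, costs, and values, of how portfolio selection can be framed as a 0/1 knapsack problem: maximize the total value of the funded projects without exceeding the available budget. Real portfolio tools layer on many more constraints (shared staff, interdependencies, risk), but the underlying trade-off is the same.

# Hypothetical sketch: portfolio selection as a 0/1 knapsack problem.
# Each candidate project has a cost and an estimated value (arbitrary units);
# we choose the subset with the greatest total value that fits the budget.
def select_portfolio(projects, budget):
    names = list(projects)
    costs = [projects[n]["cost"] for n in names]
    values = [projects[n]["value"] for n in names]
    n = len(names)
    # best[i][b] = best value achievable using the first i projects with budget b
    best = [[0] * (budget + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for b in range(budget + 1):
            best[i][b] = best[i - 1][b]  # option 1: skip project i-1
            if costs[i - 1] <= b:        # option 2: fund it, if affordable
                best[i][b] = max(best[i][b], best[i - 1][b - costs[i - 1]] + values[i - 1])
    chosen, b = [], budget               # trace back to recover the chosen projects
    for i in range(n, 0, -1):
        if best[i][b] != best[i - 1][b]:
            chosen.append(names[i - 1])
            b -= costs[i - 1]
    return best[n][budget], sorted(chosen)

candidates = {
    "CRM upgrade": {"cost": 4, "value": 10},
    "New product line": {"cost": 7, "value": 15},
    "Compliance audit tool": {"cost": 3, "value": 7},
    "Warehouse automation": {"cost": 5, "value": 9},
}
total_value, funded = select_portfolio(candidates, budget=12)
print(total_value, funded)  # 26 ['CRM upgrade', 'Compliance audit tool', 'Warehouse automation']

In practice, the "values" in such a model are themselves estimates that improve as projects pass through successive screens, which is why the selection is revisited over time rather than made once.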
At a huge aerospace technology corporation, for example, the impetus for a project nearly always comes from the market, and is loaded with government regulations. Such projects are decades-long undertakings, which necessarily require significant financial analysis. At a consumer products company, the idea for a project often originates inside the company as a way to respond to a perceived consumer demand. In that case, with less time and fewer resources at stake, the project selection process typically proceeds more quickly. Size is a major influence on an organization’s project selection process. At a large, well-established corporation, the entrenched bureaucracy can impede quick decision-making. By contrast, a twenty-person start-up can make decisions quickly and with great agility. Value and Risk Keep in mind that along with the customer’s definition of value comes the customer’s definition of the amount of risk he or she is willing to accept. As a project manager, it’s your job to help the customer understand the nature of possible risks inherent in a project, as well as the options for and costs of reducing that risk. It’s the rare customer who is actually willing or able to pay for zero risk in any undertaking. In some situations, the difference between a little risk and zero risk can be enormous. This is true, for instance, in the world of computer networking, where a network that is available 99.99% of the time (with 53 minutes and 35 seconds of down time a year) costs much less than a network that is 99.999% available (with only 5 minutes and 15 seconds of down time a year) (Dean 2013, 645). If you’re installing a network for a small chain of restaurants, shooting for 99.99% availability is a waste of time and money. By contrast, on a military or healthcare network, 99.999% availability might not be good enough. Identifying the magnitude and impact of risks, as well as potential mitigation strategies, are key elements of the initial feasibility analysis of a project. Decision-makers will need that information to assess whether the potential value of the project outweighs the costs and risks. Risk analysis will be addressed further in Lesson Eight. For some easy-to-digest summaries of the basics of risk management, check out the many YouTube videos by David Hillson, who is known in the project management world as the Risk Doctor. Start with his video named “Risk management basics: What exactly is it?” Effects of Poor Portfolio Management Organizations that lack an effective project selection process typically struggle with four major portfolio-related issues. In an article for Research Technology Management, Robert G. Cooper, Scott J. Edgett, and Elko J. Kleinschmidt describe these issues as follows: 1. Resource balancing: Resource demands usually exceed supply, as management has difficulty balancing the resource needs of projects with resource availability. 2. Prioritizing projects against each other: Many projects look good, especially in their early days, and thus too many projects “pass the hurdles” and are added to the active list. Management seems to have difficulty discriminating between the Go, Kill, and Hold projects. 3. Making Go/Kill decisions in the absence of solid information: Up-front evaluations of viability are substandard in projects, the result being that management is required to make significant investment decisions, often using very unreliable data. No wonder so many of their decisions are questionable! 4. 
Too many minor projects in the portfolio: There is an absence of major revenue generators and the kinds of projects that will yield significant technical, market, and financial breakthroughs. (2000) These problems can lead to a host of related issues. One common issue is related to capacity, which is defined as follows: Capacity is the maximum level of output that a company can sustain to make a product or provide a service. Planning for capacity requires management to accept limitations on the production process. No system can operate at full capacity for a prolonged period; inefficiencies and delays make it impossible to reach a theoretical level of output over the long run. (Investopedia n.d.) While it might sound desirable for an organization to be running at full capacity, using every available resource, in fact such a situation usually leads to log jams, making it impossible for projects to proceed according to schedule. It’s essential to leave some capacity free—typically 20% to 30% is considered desirable–for managing resources and dealing with the inevitable unexpected events that arise in living order. Attempting to execute too many projects, and therefore using up too much capacity, can generate the following problems: Fuzzy Front End The earliest stage of product development is sometimes referred to as the Fuzzy Front End (FFE), or sometimes stage 0 or the ideation stage. This stage precedes the official New Product Development stage. According to a blog post for SmartSheet, FFE is “considered one of the best opportunities for driving innovation in a company. FFE is not frequently mapped in any formal way, since this is the phase where you pitch all of your great ideas for solutions to your customer’s problems. FFE is called fuzzy because it occurs before any formal development starts, in the vague period where little structure or defined direction exists. Very few products that are originally pitched in FFE come out of it; however, this stage of pre-development is critical. Successful completion of pre-development can take you seamlessly into development” (n.d.). You can read the complete blog post here: “What Is New Product Development? 1. Time to market starts to suffer, as projects end up in a queue, waiting for people and resources to become available…. 2. People are spread very thinly across projects. With so many “balls in the air,” people start to cut corners and execute in haste. Key activities may be left out in the interest of being expedient and saving time. And quality of execution starts to suffer. The end result is higher failure rates and an inability to achieve the full potential of would-be winners…. 3. Quality of information on projects is also deficient. When the project team lacks the time to do a decent market study or a solid technical assessment, often management is forced to make continued investment decisions in the absence of solid information. And so projects are approved that should be killed. The portfolio suffers. 4. Finally, with people spread so thinly across projects, and in addition, trying to cope with their “real jobs” too, stress levels go up and morale suffers. And the team concept starts to break down. (Cooper, Edgett and Kleinschmidt 2000) The Project Selection Process No matter the speed at which its project selection process plays out, successful organizations typically build in a period of what Scott Anthony calls “staged learning,” in which the project stakeholders expand their knowledge of potential projects. 
In an interesting article in the Harvard Business Review, Anthony compares this process to the way major leagues use the minor leagues to learn more about the players they want to invest in. In the same way, consumer product companies use staged learning to expose their products to progressively higher levels of scrutiny, before making the final, big investment required to release the product to market (Anthony 2009). You can think of the project selection process as a series of screens that reduce a plethora of ideas, opportunities, and needs to a few approved projects. From all available ideas, opportunities, and needs, the organization selects a subset that warrants consideration given its alignment with the organization's strategy. As projects progress, they are subjected to a series of filters based on a variety of business and technical feasibility considerations. As shown in Figure 2-3, projects that pass all screens are refined, focused, and proceed to execution. Figure 2-3: A project selection process can be seen as a series of screens This same concept is applied in Stage-Gate™ or phase-gate models, in which a project is screened and developed as it passes through a series of stages/phases and corresponding gates. During each stage/phase, the project is refined, and at each gate a decision is required as to whether the project warrants the additional investment needed to advance to the next stage/phase of development. "The typical Stage-Gate new product process has five stages, each stage preceded by a gate. Stages define best-practice activities and deliverables, while gates rely on visible criteria for Go/Kill decisions" (Cooper, Edgett and Kleinschmidt 2000). The approach to project selection that emphasizes killing unviable projects early is neatly expressed by the Silicon Valley mantra "Fail early, fail often." As Dominic Basulto argues in a blog post for the Washington Post: The future of innovation is in learning how to fail. That may sound counter-intuitive, but if you look at several of the recent trends in innovation — everything from rapid prototyping to the common Internet practice of releasing products early in beta — they are all about making rapid, iterative adjustments that uncover tiny failures and then correcting them more quickly than one's competitors… Companies must find new ways to move failure to the beginning, rather than the end, of the innovation cycle. Put another way: Would you rather fail when your product hits the market after years of hard work and millions of dollars in sunk costs, or fail earlier when you have less to lose? (Basulto 2012) This approach is designed to help an organization make decisions about projects when very limited knowledge is available at the outset. The initial commitment of resources is devoted to figuring out if the project is viable. After that, you can decide if you are ready to proceed with detailed planning, and then, whether to implement the project. This process creates a discipline of vetting each successive investment of resources and provides safe points at which to kill the project if necessary. Another approach to project selection, set-based concurrent engineering, avoids filtering projects too quickly, instead focusing on developing multiple solutions through to final selection just before launch.
This approach is expensive and resource-hungry, but its proponents argue that the costs associated with narrowing to a single solution too soon—a solution that subsequently turns out to be sub-optimal—are greater than the resources expended on developing multiple projects in parallel. Narrowing down rapidly to a single solution is typical of many companies in the United States and in other western countries. Japanese manufacturers, by contrast, emphasize developing multiple options (even to the point of production tooling). For more on set-based project selection, see this article in MIT Sloan Management Review: Toyota’s Principles of Set-Based Concurrent Engineering.” In an article for the International Project Management Association, Joni Seeber discusses some general project selection criteria. Like Michael Porter, she argues that first and foremost, you should choose projects that align with your organization’s overall strategy. She suggests a helpful test for determining whether a project meaningfully contributes to your organization’s strategy: A quick and dirty trick to determining the meaningfulness of a project is answering the question “So what?” about intended project outcomes. The more the project aligns with the strategic direction of the organization, the more meaningful. The higher the likelihood of success, the more meaningful. To illustrate, developing a vaccine for HIV is meaningful; developing a vaccine for HIV that HIV populations cannot afford is not. Size matters as well since the size of a project and the amount of resources required are usually positively correlated. Building the pyramids of Egypt may be meaningful, but the size of the project makes it a high stakes endeavor only suitable to pharaohs and Vegas king pins. (2011) Seeber also suggests focusing on projects that draw on your organization’s core competencies: The term sunk cost fallacy refers to the tendency “to continue investing in a losing proposition because of what it’s already cost…” (Warrell 2015). See this article for a quick introduction to this essential human weakness, which can cause managers to extend failing projects long after they are no longer viable: “Sunk-Cost Bias: Is it Time to Call it Quits? Core competencies are offerings organizations claim to do best. An example of a core competency for Red Cross, for example, would be international emergency disaster response. Projects based on core competencies usually achieve outcomes with the best value propositions an organization can offer and, therefore, worthwhile for an organization to map its core competencies and select projects that build on strength. (2011) For more ideas on project selection criteria, see this video, recommended by Seeber in her article: “Project Selection Criteria.” This article summarizes the problems that arise when organizations attempt to respond to every customer request by launching new projects willy nilly, thereby exceeding its overall capacity: “Saying ‘No’ to Customers. Beware of Cognitive Biases A cognitive bias is an error in thinking that arises from the use of mental shortcuts known as heuristics. As Amos Tversky and Daniel Kahneman demonstrated in their ground-breaking study of decision-making, we all use heuristics to quickly size up a situation (1974). For example, we might use the availability heuristic to refer to the first similar situation that comes to mind, and then, using that situation as a reference, make judgements about the current situation. 
While that can be effective, it can also lead to misconceptions and illusions known as cognitive biases. Just because something readily comes to mind does not mean it is relevant to your current situation. When making an important decision, watch out for these common cognitive biases: • Confirmation bias: Paying attention only to information that confirms your preconceptions. • Groupthink: Adopting a belief because a significant number of people already hold that belief. • Conservatism: Weighting evidence you are already familiar with more heavily than new evidence. • Stereotyping: Assuming an individual will match the qualities supposedly associated with the group to which the individual belongs. Take some time to read up on the topic, starting with this overview of well-documented cognitive biases: “20 Cognitive Biases That Screw Up Your Decisions.” 2.7 Knowing When to Say No The more screens or gates a project passes through, the more you learn. Eventually, what you learn about the project might lead you to conclude the project is not viable. By this point, however, people have become invested in the project. They naturally want it to succeed and are therefore unable to perceive the downsides clearly. In other words, they suffer from a cognitive bias known as groupthink, which causes people to adopt a belief because a significant number of people already hold that belief. This problem can be exacerbated if an especially forceful or charismatic person has taken on the role of the project’s chief advocate, or project champion, in the early stages of evaluation. If the project champion then transitions into becoming the project manager, killing the project can be even harder, once the project manager becomes absorbed in the technical details and loses track of the larger organizational issues (Kerzner and Kerzner, 24). At this point, it is often wise to appoint an exit champion, or a manager who is charged with advocating the end of a project if he or she thinks that is in the best interests of the organization, regardless of the desires of the project team members. Even if your organization doesn’t allow for an officially designated exit champion, it probably has some sort of project selection process that includes points at which a project can be killed. As a project manager, you need to understand that process, follow it carefully, and make sure everyone involved feels free to say the words “We need to kill this project.” When deciding whether to kill a project, pay attention to the following red flags, which, according to Joni Seeber, often signal a poorly conceived project: 1. Lack of strategic fit with mission 2. Lack of stakeholder support 3. Unclear responsibility for project risks 4. Risks outweigh potential benefits 5. Unclear time component 6. Unrealistic time frame, budget, & scope 7. Unclear project requirements 8. Unattainable project requirements or insuperable constraints 9. Unclear responsibility for project outcomes (2011) In Lesson 12, we’ll focus on auditing, the systematic evaluation of a project designed to help a team decide whether to proceed or call it quits. ~Practical Tips • Tie every project to your organization’s strategy: Make a conscious effort to connect your organization’s strategy to every project you manage, with the goal of helping all stakeholders understand their larger purpose. For example, your organization might settle on a strategy of pursuing government contracts. 
This would involve learning everything about the very geometric-order world of government contracts, which requires careful adherence to the details of an RFP, and then pursuing government contracts systematically. An organization taking this approach would have a far greater chance of success than one that occasionally pursued government contracts, without making any serious attempt to learn the in’s and out’s of such work. • Identify the decision-makers in your organization: An effective project manager understands which people in an organization actually have the influence to make a project happen and addresses the interests and concerns of those decision-makers. • Understand your organization’s project selection process: It’s important to understand how your organization decides which projects to take on, because that’s critical to how you go about seeking approval for your project and how you present it to decision makers. • Learn all you can about your organization’s project selection criteria: In many organizations, the criteria for project selection are not always clear and quantitative. Seek opportunities to engage with colleagues and managers who can help you better understand how and by whom decisions about project selection and continuation are made. • Be ready to adapt to a change in strategy: Implementing an organizational strategy requires discipline and tradeoffs. Ideally, upper management monitors the effectiveness of the strategy, just as a project manager monitors a project, and makes changes when necessary. If externalities force your organization to change its strategy, you have to be ready to adapt. • Accept that a green-lighted project could be cancelled at a later stage: A project might get a green light during the project selection process, only to be terminated later by another decision maker, who might be an officially designated exit champion, or might be someone who simply isn’t interested in the project. In either case, as always in living order, you need to be flexible and adapt. • Be mindful of how your project ties in to related projects: The interdependence of projects can affect an organization’s portfolio strategy. One project may not have value except in relation to one or more others. Keep in mind that it may be necessary to execute all or none of a cluster of related projects. • Be mindful of the importance of having key personnel available: Often, the biggest constraint on projects is getting key personnel assigned and working. • Keep in mind the relationship between strategy and scope: When discussing altering the scope of a project, take some time to determine if the altered scope conflicts with your organization’s strategy. If it does, then it’s probably not a good idea. ~Summary • Effective project management starts with selecting the right projects, managing them within a program of connected projects, and within a portfolio of all the organization’s projects and programs. • An organization’s strategy is an expression of its unique mission in the market, setting it apart from competing organizations. Every decision about a project, program, or portfolio should support the organization’s strategy. The strategy, in turn, should define the company’s portfolio and day-to-day operations. Management must be willing to make trade-offs, pursuing some projects and declining or killing others in order to stay true to its strategy. • According to Michael Porter, operational effectiveness—working faster, and cutting costs—is not the same as strategy. 
Competing on operational effectiveness alone is not viable over the long term, because competitors will always catch up (Hammonds 2001). • Experts on strategy point out several reasons why executing a strategy is so difficult. One problem is that managers tend to think trade-offs to be signs of weakness. Organizations are also susceptible to the sunk cost fallacy, refusing to kill projects that don’t align with company strategy just because they’ve already spent money on them. • Aligning an organization’s portfolio of projects to its overall strategy involves difficult choices about trade-offs and project selection. Organizations that lack an effective project selection process typically struggle with four major portfolio-related issues: resource balancing, prioritizing projects, making decisions about which projects to execute and which to kill, and having too many minor projects in a portfolio. • Many models have been proposed to describe the most common project selection process, in which many ideas are evaluated, with only a few actually proceeding to project execution. Whatever project selection process your organization employs, it should focus on selecting projects that align with the organization’s strategy. • To stay true to its strategy, an organization must be prepared to kill projects. This can be difficult, especially if the project’s chief advocate, the project champion, is forceful or has transitioned into becoming the project manager, and so is absorbed in the day-to-day details of the project. ~Glossary • capacity—The “maximum level of output that a company can sustain to make a product or provide a service. Planning for capacity requires management to accept limitations on the production process. No system can operate at full capacity for a prolonged period; inefficiencies and delays make it impossible to reach a theoretical level of output over the long run” (Investopedia n.d.). • exit champion—A manager who is charged with advocating the end of a project if he or she thinks that is in the best interests of the organization, regardless of the desires of the project team members. • groupthink—A type of cognitive bias that causes people to adopt a belief because a significant number of people already hold that belief. • operational effectiveness— Any kind of practice which allows a business or other organization to maximize the use of their inputs by developing products at a faster pace than competitors or reducing defects, for example (BusinessDictionary.com). • portfolio optimization—The “difficult and iterative process of choosing and constantly monitoring what the organization commits to do” (Morgan, Levitt, & Malek, 2007, p. 167). • portfolio—The “array of investments in projects and programs a company chooses to pursue” (Morgan, Levitt and Malek 2007, 3). • program—“A cluster of interconnected projects” (Morgan, Levitt and Malek 2007, 9). • project—The “temporary initiatives that companies put into place alongside their ongoing operations to achieve specific goals. They are clearly defined packages of work, bound by deadlines and endowed with resources including budgets, people, and facilities” (Morgan, Levitt and Malek 2007, 3). • project champion—A project team member who serves as the project’s chief advocate, especially during the early days of planning. The project champion often becomes the project manager, but not always. 
• satisfice—A term devised by Nobel Prize winning economist Herbert Simon (by combining “satisfy” and “suffice”) to describe a realistic form of decision-making, in which people accept “the ‘good-enough’ solution rather than searching indefinitely for the best solution” (Little 2011). • set-based concurrent engineering—An approach to project selection that relies on not filtering projects too quickly, but rather developing multiple solutions through to final selection just before launch. This approach is expensive and resource-hungry, but it is argued that the costs of delay by narrowing to a single solution too soon—which subsequently turns out not to be viable (or sub-optimal)—is greater than the resources expended on multiple, parallel developments. • strategy—According to Merriam-Webster, “a careful plan or method for achieving a particular goal usually over a long period of time.” • sunk cost fallacy—The tendency “to continue investing in a losing proposition because of what it’s already cost” (Warrell 2015). ~References Anthony, S. (2009). Major League Innovation. Harvard Business Review. Retrieved from https://hbr.org/2009/10/major-league-innovation Ballé, M. (2016, February 2). Can lean support strategy as much as it does operations? Planet Lean. Retrieved from http://planet-lean.com/can-lean-supp...oes-operations Ballé, M., Jones, D., Chaize, J., & Fiume, O. (2017). The Lean Strategy: Using Lean to Create Competitive Advantage, Unleash Innovation, and Deliver Sustainable Growth. New York: McGraw-Hill Education. BusinessDictionary.com. n.d. “Operational effectiveness.” BusinessDictionary. Accessed July 29, 2018. http://www.businessdictionary.com/de...ctiveness.html Cooper, D. G., Edgett, D. J., & Kleinschmidt, D. J. (2000). New Problems, New Solutions: Making Portfolio Management More Effective. Research Technology Management, 43(2). Retrieved from http://www.stage-gate.net/downloads/...pers/wp_09.pdf Dean, Tamara. 2013. Network+ Guide to Networks, Sixth Edition. Boston: Course Technology. El-Erian, M. A. (2014, June 13). The Secret to Southwest’s Success. Bloomberg View. Retrieved from http://www.bloombergview.com/article...west-s-success Foley, R. J. (2006, July 6). \$26 Million Software Scrapped by UW System: Officials Regret the Loss But Believe A Different Vendor Will Prove Better In the Long Run. Wisconsin State Journal. Retrieved from http://host.madison.com/news/local/m...27e23bf65.html Hammonds, K. H. (2001, February 28). Michael Porter’s Big Ideas. Fast Company. Retrieved from https://www.fastcompany.com/42485/mi...ters-big-ideas Investopedia. (n.d.). Investopedia. Retrieved August 12, 2016, from http://www.investopedia.com Kerzner, H., & Kerzner, H. R. (2013). Project Management: A Systems Approach to Planning, Scheduling, and Controlling. Hoboken: John Wiley & Sons. Little, D. (2011, January 30). Herbert Simon’s Satisficing Life. Retrieved from Understanding Society: Innovative Thinking about a Global World: http://understandingsociety.blogspot...cing-life.html Morgan, M., Levitt, R. E., & Malek, W. A. (2007). Executing Your Strategy: How to Break It Down and Get It Done. Boston: Harvard Business School Publishing. Porter, M. E. (1996). What is Strategy? Harvard Business Review(November-December). Retrieved from https://hbr.org/1996/11/what-is-strategy Porter, M. E. (2001, November 12). Manager’s Journal: How to Profit From a Downturn. Wall Street Journal, Eastern edition. Scheck, J. (2016, January 14). 
Oil Route Forces Companies to Delay Decisions on \$380 Billion in Projects. Wall Street Journal. Retrieved from http://www.wsj.com/articles/oil-rout...cts-1452775590 Seeber, J. (2011). Project Selection Criteria: How to Play it Right. Retrieved from International Project Management Association: http://www.ipma-usa.org/articles/Sel...onCriteria.pdf SmartSheet. (n.d.). What Is New Product Development? Retrieved July 25, 2018, from SmartSheet: https://www.smartsheet.com/all-about...opment-process Tversky, A., & Kahneman, D. (1974). Judgment under Uncertainty: Heuristics and Biases. Science, 185(4157), 1124-1131. Warrell, M. (2015, September 15). Sunk-Cost Bias: Is It Time To Call It Quits? Forbes. Retrieved from http://www.forbes.com/sites/margiewa.../#129648a860ee 1. Email message to Jeffrey Russell, January 17, 2018.
Planning without action is futile. Action without planning is fatal. —Anonymous Objectives After reading this chapter, you will be able to • Define basic terms related to project initiation and explain how the initiation phase fits into a project's overall life cycle • Discuss the importance of defining "success" for a project • Describe the elements of a project charter and explain its role in the initiation phase • Explain issues related to project scope • Distinguish between adaptive and technical challenges • Explain the importance of understanding a project's context, and the potential for that context to change as you begin the initiation process The Big Ideas In This Lesson • To successfully initiate a project, you need to look into the future, through the project's entire life cycle, and anticipate the many issues you might have to deal with. Only then can you clearly define what success means for your project. It's essential to avoid a purely geometric approach to initiation, one that presumes you will simply respond to changing events as they occur, rather than attempting to anticipate them. • Of the three constraints on project management—scope, budget, and schedule—scope is the most difficult to pin down. Describing it clearly and in detail takes a lot of effort. During initiation, you need to define the project's scope as clearly as possible, and then refine it as the project unfolds and you learn more about the project and the customer's needs. • The potential for changing contexts means that no two projects are the same. Even if you think you've completed an identical project recently, you'll almost certainly find that externalities and differences in context will force you to alter your approach in some way or another. 3.1 Initiation and the Project Life Cycle Physics tells us that light is both particle and wave. Project management has a similarly dual nature; it is both a series of distinct phases with a clear beginning and end, and a continuous, circular process in which each ending leads to a new beginning. Throughout a project, a successful project manager strives to anticipate changing conditions, rather than simply responding to them as they arise. Let's start with the more traditional view, which describes project management as a series of sequential phases, with project initiation coming right after project selection. You can think of these phases, shown in Figure 3-1, as the particle nature of project management. Figure 3-1: Traditional view of project management But while project initiation marks the official beginning of a project, doing it well also requires looking past the making stage to the entire life cycle of the project's end result. You can think of this as the wave nature of project management. As illustrated in Figure 3-2, the making stage, in which a project is initiated and executed, is one part of the larger cycle that includes the operating/using/changing stage, in which the customer makes use of the project; and the demolishing stage, when the project is retired so it can be replaced by something new and better. Figure 3-2: To successfully initiate a project, you need to envision the entire life cycle of the project's result (Source: John Nelson) Taking this holistic, life-cycle view will encourage you to ask better questions about what "success" really means for your project.
For example, as sustainability becomes an ever-present engineering concern, project managers often need to factor in long-term environmental effects when judging a project’s success. This entails the use of tools like life cycle assessments (LCA) for evaluating the “potential environmental impacts of a product, material, process, or activity” and for “assessing a range of environmental impacts across the full life cycle of a product system, from materials acquisition to manufacturing, use, and final disposition” (United States Environmental Protection Agency n.d.). An LCA analysis early in the initiation phase can help to broaden your view of the potential effects of a project and to increase the range of options you consider as you set the project in motion. In the construction industry, LCAs often focus on energy and water use of a building’s life cycle. In product development, LCAs are used to assess the impacts of raw materials processing, production, packaging, and recycling, among other things. For an interesting example of an apparel industry analysis, see the following: “The Life Cycle of a Jean.” An LCA is just one of many ways to kick-start the knowledge acquisition process that unfolds throughout a project. It’s not unusual to know little to nothing about a project at the start. By the time you finish, you know everything you wished you knew at the beginning, and you have acquired knowledge that you can carry forward to new projects. Anything you learn about a project is important, but the information you compile during initiation sets you up to respond to the living order uncertainty that will inevitably arise as the project unfolds. It can encourage you to look past the initiation phase to the project’s entire life cycle, and then to circle back using your new knowledge to take a more holistic approach to project initiation. One of the best ways to learn about a project is to talk to everyone involved: • Engage with the customer to learn all you can about what they want out of the project over the long term. In other words, find out how the customer defines the project’s value. Be prepared to ask lots of questions. In some situations, it might be helpful to watch the customer use a product to get a better idea of unmet needs. Keep in mind that customers don’t always know exactly what they want, and it may not have occurred to them that they can shape their thinking around the project’s life cycle. They might need the help of an informed, experienced, sensitive project manager to formulate their goals. • Think broadly about who the customer is and include the needs of the end user—the ultimate customer—in your thinking. For example, if you are building a new clinic, don’t confine yourself to the executives of the HMO paying for the building. Take time to talk to the people who will really be using the building—doctors, nurses, technicians, administrative staff, maintenance workers, and patients. • Talk to stakeholders—the people who will be affected by or who can affect the project—and ask about their concerns and needs. Make sure you understand their basic assumptions. • As when identifying customers, think broadly about who the stakeholders are. The customer and end users are clearly stakeholders, as is the manager sponsoring the project, and the project team members. But don’t forget about vendors, resource owners, government officials and regulatory bodies, and members of other departments in your organization. 
(Jordan 2012) Making these conversations and analyses of needs a priority will give you a broader view of your project’s overall life cycle. Though of course, in the day-to-day running of a project, you can’t spend every minute looking ahead, you do have to pay attention to the traditional phases of project management, focusing on details like schedules and personnel. Even so, as you complete the tasks related to one phase, you often need to be thinking ahead to tasks related to a subsequent phase. Significant overlap between the various phases is common, as shown in Figure 3-3. You will often need to look back at and revise the information you compiled during the initiation phase as you learn more about the project. Figure 3-3: Even in the traditional view of project management, the phases of a project often overlap Remember, a project is a learning acquisition activity. In most cases, what you know during project initiation is only a small fraction of what you will know when the project is finished. You have to be prepared to adapt as you learn more about your project. 3.2 The Work of Initiation During initiation you will typically create the first draft of the following items, which take a high-level view of the project: • project charter: A “single, consolidated source of information” (Richter 2014) for project initiation and planning. It describes your current knowledge about the project and includes information such as the names of all stakeholders, a statement of your organization’s needs, the history leading up to the project, the project’s purpose, deliverables, and roles and responsibilities. A project charter is also sometimes called a project overview statement. It may be helpful to think of the project charter as a contract between the project team and the project sponsors. • scope statement: A document that defines the project’s scope. Defining scope, which is really the heart of the initiation phase, is discussed in detail in the next section. • business case: An “argument, usually documented, that is intended to convince a decision maker to approve some kind of action. As a rule, a business case has to articulate a clear path to an attractive return on investment (ROI). At its simplest, a business case could be a spoken suggestion…. For more complex issues, a business case should be presented in a carefully constructed document. A business case document should examine benefits and risks involved with both taking the action and, conversely, not taking the action. The conclusion should be a compelling argument for implementation” (TechTarget n.d.). A business case addresses these fundamental questions: 1) Why this project? 2) Why this project over another project? and 3) Why this project now? Both the project charter and the scope statement typically evolve as the project unfolds and you learn more about the project details in the planning phase. This means that as you work through the initiation phase, you should always be thinking ahead to the following elements of the planning phase: • work breakdown structure (WBS): A description of the tasks associated with project deliverables, often in the form of a tree diagram. A work breakdown structure “displays the relationship of each task to the other tasks, to the whole and the end product (goal or objective). It shows the allocation of responsibility and identifies resources required and time available at each stage for project monitoring and management” (Business Dictionary n.d.). 
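As a rough illustration of that tree structure (the project and tasks here are hypothetical, and Python is used only as a convenient notation), a WBS breaks each deliverable into progressively smaller pieces of work:

# Hypothetical, highly simplified WBS for a small website project,
# expressed as a nested dictionary: each level decomposes the one above it.
wbs = {
    "1 Website redesign": {
        "1.1 Requirements": {
            "1.1.1 Stakeholder interviews": {},
            "1.1.2 Requirements document": {},
        },
        "1.2 Design": {
            "1.2.1 Wireframes": {},
            "1.2.2 Visual design": {},
        },
        "1.3 Build and test": {
            "1.3.1 Implement pages": {},
            "1.3.2 User acceptance testing": {},
        },
    }
}

def print_wbs(node, indent=0):
    # Print the WBS as an indented outline.
    for name, children in node.items():
        print(" " * indent + name)
        print_wbs(children, indent + 2)

print_wbs(wbs)

The lowest level of such a breakdown corresponds to the work packages discussed below.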
You can download an Excel file with a template for a work breakdown structure here: Work Breakdown Structure. • organizational breakdown structure (OBS): A description of the project team. It explains "who reports to whom, the details of the hierarchy, and the reporting structure…. Organizational breakdown structures are normally communicated visually through the use of graphs or charts. A project or general manager is listed and underneath the PM several divisions might be created, such as product development, design, materials management and production" (Bradley n.d.). See also responsibility assignment matrix (RAM) below. • work package: A "group of related tasks within a project. Because they look like projects themselves, they are often thought of as sub-projects within a larger project. Work packages are the smallest unit of work that a project can be broken down to when creating your Work Breakdown Structure (WBS)" (Wrike n.d.). • responsibility assignment matrix (RAM): A type of organizational breakdown structure in the form of a grid that typically lists project tasks in the first column and stakeholders across the top row, with tasks assigned to the various stakeholders. You can use it to determine if you have enough resources for a project, and to record who is responsible for what. RAMs come in several forms, but one of the most useful is a responsible, accountable, consulted, and informed (RACI) chart, which designates each stakeholder's relationship to each task, using the following categories: responsible (actually does the work), accountable (has final authority over the activity), consulted (available to provide information about the activity), or informed (is informed after the activity is completed, often because his or her own work depends on it) (Doglione 2018). You can download a template for a RACI matrix here: "Responsibility Assignment Matrix." For a brief introduction to RACI charts, see this web page: "RACI Charts." (A RACI chart is sometimes also referred to as a linear responsibility chart.) Avoid the Mediocrity of Idea Averaging As you embark on the systematic learning that is the hallmark of the initiation phase, you'll come across many ideas about the best way to achieve success. Some may be truly innovative, while others are slight variations on the same old thing. If innovation is your goal, then take care not to fall prey to idea averaging—taking a little from one idea, and a little from another, and a little from another—without fully committing to any. According to Andrew Hill, in a blog post on the topic, one way to avoid idea averaging "is to create a strong culture of feedback. Giving team members settings where they can point out flaws in current projects will help shift their mind into critical thinking mode. Feedback also gives you a tool to help measure, detect, or predict the failure of a project. In this way, the ideas you choose to act on are never set in stone, they are constantly being re-evaluated and rethought" (Hill 2016). You can read the complete blog post here: "How to Avoid Idea Averaging." 3.3 Defining Success Experienced project managers know that you need to start fast by defining what "success" means for your project and determining how to measure it. To accomplish this, you need to talk with the individuals and organizations who will determine whether the project is a success. This may include internal or external clients, individuals or groups with approval authority, or groups of potential customers.
Many projects flounder when the engineers responsible say, “We met our objective. We followed the plan as written” but the customer says, “You didn’t provide what I wanted.” Countless products have been released and subsequently discontinued because the specifications did not match the customers’ needs. One example is the 2013 release of Facebook Home, a user interface for the Android phone that turned Facebook into the user’s home screen, removing the features, such as docks and app folders, that Android users love. How could the company make such a huge mistake? According to Business Insider, the Facebook Home development team, composed primarily of iPhone users, “was unfamiliar with the features that a normal Android user might get used to, and might not want to lose when they installed Facebook Home” (Carlson 2013). This failure to learn about the customers’ needs had disastrous results. The price of the HTC First, the phone on which Facebook Home came preinstalled, dropped from \$99 to 99 cents within a few weeks (Tate 2013). Ultimately, Time magazine named the HTC First one of the “lamest moments in tech” for 2013 (McCracken 2013). In the medical device industry, a successfully developed product might eventually be considered a failure if the development team underestimates the hurdles to achieving FDA certification. And we can look to self-driving cars for an example of how success extends beyond the narrower scope of product completion. Self-driving cars exist, but they have to be able to successfully interact with unpredictable human drivers and they need to have governmental approval in order to become a successful project. In capital projects, the total cost of ownership (the total of direct and indirect costs related to the construction and use of a building) is crucial to determining whether or not a building is a success. For example, if a building designed for use only during the Olympics ends up being used for years afterwards, the building’s maintenance costs will probably grow exponentially, transforming a supposedly successful building into a failure over the long term from the point of view of the host city. The key is realistically projecting the building’s total design life. The cost of maintenance also plays a part in the question of whether construction funded by donors can be considered a success. In such cases, success is often defined as the facility’s “grand opening” when it should really be defined as a fully funded operational infrastructure. Successful project managers are typically very specific when they define project success. By contrast, new project managers make the mistake of being too general. However, being specific doesn’t necessarily mean being longwinded. By focusing on the end user’s needs rather than on generating an exhaustive catalogue of physical requirements, you will provide a concise, useful definition of “success” for your project. By taking this approach, Lee Evey, the manager of the Pentagon rebuilding project after the 9/11 attack, was able to consolidate thousands of pages of specifications into “16 performance-based requirements. For example, his energy-efficiency requirement was that the building not use more than a specific number of BTUs [British Thermal Units] to heat and cool the building per year. It was then up to the design-build teams to meet the requirement within the budget” (Rife 2005). Success in Lean and Agile Traditional project managers tend to define success in terms of completing a project on time and within budget. 
But Lean presumes a more expansive definition of success—one that prioritizes eliminating waste and maximizing value, and in the process building customer loyalty that will extend to as-yet-unforeseen projects. The relentless focus on eliminating waste in the value stream has the corollary effect of keeping projects on schedule and within budget. It also tends to improve the quality of the final product, adding value that will actually benefit the customer. To learn more, see this thorough explanation of the history and usefulness of Lean in project management: “The Origins of Lean Project Management.” In Agile development, a team agrees on its definition of intermediate success in the form of a sprint goal at every planning meeting. Success for the entire project is not measured in terms of being on time and on budget. Instead, in the spirit of the Agile manifesto, success means delivering “working software frequently”—software that the customer can actually use (Beedle et al. n.d.). Ultimately, success in Agile means delivering as much working software as the schedule and budget will allow. Agile coach Angela Johnson explains her vision of Agile success in this interesting blog post: “Defining Success Metrics for an Agile Project Methodology.” 3.4 Creating the Project Charter Developing the project charter is one of the most important parts of project initiation. By including all key stakeholders in the process of creating it, you will help ensure agreement on what constitutes project success, relevant constraints (e.g., time and budget), and the definition of scope. The exact form of a project charter will vary from one organization to another. At some companies, the project charter is a spreadsheet file; in others, a document file. You’ll find many templates for project charters available on the web. According to Managing Projects Large and Small: The Fundamental Skills for Delivering on Budget and on Time, a typical project charter contains some or all of the following: • Name of project’s sponsor • Relationship between the project’s goals and higher organizational goals • Benefits of the project to the organization • Expected time frame of the work • Concise description of project deliverables (objectives) • Budget, allocations, and resources available to the project team • Project manager’s authority • Sponsor’s signature (Harvard Business School Publishing Corporation 2006, 2-3) Above all else, a project charter should be clear and specific about the project’s goals—that is, about the definition of success. The goals should be measurable, so there is no confusion about whether or not the project is a success: Ambiguity on the goals can lead to misunderstandings, disappointment, and expensive rework. Consider this example of a broad-brush objective: “Develop a Web site that’s capable of providing fast, accurate, cost-effective product information and fulfillment to our customers.” That is how a sponsor might describe the project’s objective in the charter. But what exactly does it mean? What is “fast”? How should accuracy be defined? Is one error in 1,000 transactions acceptable, or would one error in 10,000 meet the sponsor’s expectations? To what degree must the site be cost effective? Each of those questions should be answered in consultation with the sponsor and key stakeholders. 
(Harvard Business School Publishing Corporation 2006, 4-5) But while you want to be specific about the project goals, take care not to dwell on the precise details regarding how you will achieve those goals: A thoughtful charter indicates the ends but does not specify the means. The means should be left to the project manager, team leader, and members. Doing otherwise—that is, telling the team what it should do and how to do it—would undermine any benefit derived from having recruited a competent team. (Harvard Business School Publishing Corporation 2006, 5) Scope in Agile Robert Merrill, a Senior Business Analyst at the University of Wisconsin-Madison, and an Agile coach, advises taking a three-part approach to scope on Agile projects, determining the following: 1. Minimum viable features—If we can’t deliver this much within schedule and budget constraints, the project should be cancelled. 2. Features we can’t think about now—Although these might be features the client wants, they are not something we can create, and so we can’t waste time and mental energy thinking about them. 3. Everything else—This is our unpredictability buffer, which we maintain to protect schedule and budget. Note that these categories are not frozen; they can be changed during each iteration planning cycle. Scope in an Agile project is variable, but carefully and visibly managed. 3.5 Managing Project Scope Time, cost, and scope are known as the triple constraints of project management. It’s not possible to change one without changing at least one of the others. If the project takes twice as long as expected to complete, then the cost will almost certainly go up. On the other hand, a decision to cut costs, perhaps by using less experienced labor, could lead to a work slowdown, extending the schedule. Such a decision might also result in a change to the project’s scope, perhaps in the form of a lower quality product. The initiation phase is too early in the project to nail down precise details about time and cost, but it is a good time to think long and hard about scope, which is “all of the work that needs to be done to provide the product or service your project is delivering” (Martinez n.d.). In this early stage, you and the project stakeholders might do some blue sky thinking about what your project could possibly achieve, without regard to the constraints of time, cost, and scope. But before too long you’ll need to zero in on a definition of the project’s scope, formalizing it as a scope statement, using the information currently available to you. Except for the simplest projects, any scope definition will almost certainly evolve as you learn more about the project and the customer’s needs. The term scope evolution refers to changes that all stakeholders agree on, and that are accompanied by corresponding changes in budget and schedule. Scope evolution is a natural result of the kind of learning that goes on as a project unfolds—for example, learning that arises from fresh insights into the needs of the end user, new regulations, or upheaval in the marketplace. As long as all stakeholders agree on the scope changes (and the associated changes to the budget and schedule), scope evolution ensures that customers actually get what they want out of the project. The more you talk with the client and learn about their needs, the more you will be able to refine the scope. Indeed, one of the main jobs of a project manager is managing scope evolution. 
But different types of projects will involve varying amounts of scope evolution. For example, if you’re working on a project related to satisfying a specific environmental regulation, the initial definition of the project’s scope might be clear, requiring little refinement as the project unfolds, as long as the regulation itself is not altered. But if you are working on a product designed to satisfy a brand-new market demand, you might need to refine the scope continually to ensure that you satisfy your customers’ needs. Perhaps the most common cause of scope evolution is a change in the context in which a project is planned and executed. Alterations in market forces, changing demographics, new or more vigorous competition, and technological advancements can all change a project’s context, forcing you to rethink its scope. This potential for changing contexts means that no two projects are the same. You might think Project B is nearly identical to Project A, but then a sudden shift in context can change everything. As shown in Figure 3-4, context is largely defined by the organizational, social, and political structures in which a project occurs. Figure 3-4: Context is largely defined by the organizational, social, and political structures in which a project occurs While you need to stay open to the possibility of scope evolution, it’s essential to resist scope creep, an uncontrolled cascade of changes to the scope with no corresponding authorized changes in budget and schedule. The difference between the two is the difference between managed and unmanaged change: Success ≠ No Changes to Project Scope In your efforts to prevent scope creep, take care that you don’t make the mistake of equating project success with completing the project exactly as originally specified in the scope statement during the initiation phase. In the ever-changing currents of the living order, scope evolution is often necessary and desirable. As project stakeholders learn new information about the project, they will naturally make suggestions about ways to alter the original plan. But never fear—if they have a clear understanding of the definition of project success, they will be able to distinguish between scope evolution and scope creep. So as the project manager, you want to make sure everyone does in fact understand the meaning of “success.” • Scope evolution is managed change. It is an approved alteration to the project scope that occurs as the project participants learn more about the project. It results in an official change in the project scope, and therefore to the project budget or schedule, as agreed to by all project participants. This kind of managed change is a natural and rational result of the kind of learning that goes on throughout the course of a project. It is a conscious choice necessitated by new information forcing you to reconsider project essentials in order to achieve the intended project value. • Scope creep is unmanaged change. It is caused by uncontrolled changes to the project scope. Such changes might add value from the customer’s perspective, but the time, money, and resources consumed by the change of scope lead to additional overruns. Scope creep tends to happen bit by bit because no one is paying close attention to the project’s scope. For example, in a kitchen remodeling project intended to replace countertops and cabinets, deciding at the last minute to replace all appliances might be an example of scope creep. 
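To make that distinction concrete, here is a minimal sketch, assuming Python and using hypothetical fields, of the kind of check a project manager might apply to a proposed change: a change that all stakeholders have approved, with corresponding budget and schedule revisions, is scope evolution; anything else should be treated as potential scope creep until it has been formally reviewed.

# Hypothetical sketch: classify a proposed scope change as managed
# "scope evolution" or unmanaged "scope creep."
from dataclasses import dataclass

@dataclass
class ScopeChange:
    description: str
    approved_by_stakeholders: bool  # did all stakeholders sign off on the change?
    budget_revised: bool            # was the budget formally updated to match?
    schedule_revised: bool          # was the schedule formally updated to match?

def classify(change):
    if change.approved_by_stakeholders and change.budget_revised and change.schedule_revised:
        return "scope evolution (managed change)"
    return "potential scope creep (unmanaged change): pause and re-baseline before proceeding"

late_request = ScopeChange("Replace all kitchen appliances", False, False, False)
print(classify(late_request))  # potential scope creep (unmanaged change): ...

The point is not the code but the discipline it represents: every proposed change is either consciously absorbed into the plan, with the budget and schedule adjusted to match, or consciously declined.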
Creating a Clear Scope Statement
The key to managing scope is a carefully crafted scope statement, which should be clear and precise. The details of how you plan to carry out a project may be vague at first, but what you want to achieve should be perfectly clear. Vagueness can lead to small changes to the project's scope, which in turn lead to other changes, and so on, until the original project is no longer recognizable. Writing a scope statement, the document that defines the project's scope, is a major part of the initiation phase. However, according to Brad Bigelow in an article for the Project Management Institute, it is "usually expressed in qualitative terms that leave room for interpretation and misunderstanding. Consequently, it's often the biggest source of conflicts in a project" (2012, 1). To avoid such problems, experienced project managers put a lot of effort into learning what should and shouldn't be included in the project, and then articulating these boundaries as clearly as possible in the form of a scope statement. According to Bigelow, this work is essential to ensuring a project's success: "No project's scope can ever be entirely free of fuzziness—free from subjectivity and imperfect definitions—as long as human beings are involved. On the other hand, it's also highly improbable that any project will ever survive initiation if its scope is entirely vague, undefined, and subject to unpredictable expectations" (2). If the scope is poorly defined, then what is or isn't within the project scope is reduced to a matter of perspective. Not surprisingly, these "different perspectives…can often be the root of conflicts within a project" (2). Bigelow describes a project in which the team and the customer see things very differently:
A project team may, for example, propose to prepare three prototypes to refine the customer's requirements and reduce production risks. The customer may reject this proposal as out of scope…. Because the prototypes are expendable and will not be considered finished products, the customer may refuse to consider them as deliverables. And if he perceives that prototyping delays final production and consumes resources that could be better used, he may reject the activity as outside the acceptable extent of project work. (2)
When the scope is poorly defined, satisfying the customer can grow increasingly difficult, with the team going off and creating what it thinks the customer wants, only to be told, "No, that's not it." Opinions vary on exactly what a scope statement should include, but at the very least it should contain the following:
• A brief justification of the project's purpose, including a summary of the business needs the project will address.
• An explanation of the project's goals.
• Acceptance criteria that specify the conditions the product or service must satisfy before the customer will accept the deliverables.
• Deliverables, which are "the quantifiable goods or services that will be provided upon the completion of a project. Deliverables can be tangible or intangible parts of the development process, and they are often specified functions or characteristics of the project" (Investopedia n.d.).
• An explanation of anything excluded from the project—in other words, an explanation of what is out of scope for the project. This list should be "as detailed as is necessary to define the project boundaries to all stakeholders" (Feldsher 2016).
• Constraints, such as budget and schedule.
• Assumptions, including anything you currently believe to be true about the project. It's also helpful to include ideas "about how you will address uncertain information as you conceive, plan, and perform your project" (Portny n.d.).
• An explanation of any new or unusual technology you plan to use throughout the project. This is not a typical part of a scope statement, but "it's likely that stakeholders will appreciate the transparency and feel more comfortable with the project moving forward" (Feldsher 2016).
Some Practical Ideas for Working with Scope
A successful project manager is skilled at guiding customers, who simply may not know what they want until they see it. For truly innovative products, customers may not even be able to define what they want. An adage attributed to Henry Ford sums this up neatly: "If I had asked people what they wanted, they would have said faster horses." The Sony Walkman was not created to satisfy any identified consumer demand for portable music, but in response to a request from Sony Co-founder Masaru Ibuka for a convenient way to listen to opera. A Sony designer got to work on the special request, and the result was one of Sony's most successful products of all time (Franzen 2014).
The Agile Perspective on Scope Creep
Agile welcomes changes to product requirements even late in the development process. Indeed, the founders of Agile made an openness to late-breaking changes one of their "Principles behind the Agile Manifesto," which you can read here: "Principles Behind the Agile Manifesto." In this environment of constant iterations and revisions, Agile developers have a different perspective on scope creep. A blog post for OptiSol spells out some ways to identify what is and isn't scope creep in Agile. Making changes "before the team has started to think about the details" would not be considered scope creep in Agile, nor would replacing one feature with another, as long as the new feature doesn't add new work for the team. However, swapping a new feature for a feature that is already complete is definitely a form of scope creep, because it creates new work. The same is true of replacing a small feature with something more complex (OptiSol n.d.). You can read the complete blog post here: "What is Scope Creep in Agile Development?"
When developers at Facebook introduced Facebook Home, they thought they were guiding their customers to a new way of using their mobile phones, just as Sony guided their customers to a new way of listening to music. But because the Facebook developers knew so little about the needs of their Android-using customers, they ended up creating a useless product. The moral of the story: before you attempt to guide your customers, make sure you understand their needs. Here are a few other tips to keep in mind when thinking about scope:
• Engineers tend to focus too much on what they know, with little regard to what they don't know. Take some time to think about what you know you don't know. Then try to imagine how you would deal with the possible risks those unknowns might entail.
• Engineers tend to be highly detailed people. This can be a problem during project initiation if it compels you to map out every single detail of the project with no regard for the big picture. Of course, the details are important, but you also need to take a high-level view at the beginning. Not all details are of equal importance, and the details that are important may vary over time.
• Engineers tend to focus on doing rather than thinking. They like to jump right in and start executing a project. But remember that project initiation is your time to do some thinking first. Scope definition, in particular, is a thinking process in which you try to conceptualize what you don't know.
• Not all project requirements are equal. They can range from "absolutely must have" to "would like to have." When discussing requirements with the customer, make sure you understand where each requirement fits on this scale.
• Ask the customer as many different questions as possible about the project. "By probing the customer's requirements and expectations from as many angles as possible, a project team can significantly reduce the number of uncertain elements of project scope and reduce the potential variability of these elements. It does not guarantee that conflicts over project scope will not occur, but it can help isolate the potential sources of these conflicts" (Bigelow 2012, 4).
• The best project managers understand the importance of learning all they can about their clients' definition of "value" and "success," and then suggest ways to achieve those goals that go beyond what their clients might be able to envision. Such project managers focus on performance requirements and options to achieve them, and avoid locking into one approach too quickly.
• As the project progresses past initiation and into planning and execution, remember to review the project's scope definition regularly to ensure that it is still appropriate. As the project moves forward and stakeholders learn more about it, scope changes are all but inevitable. "Indeed, the failure of a project to accommodate a change in scope can have far more serious consequences for the organization as a whole than if the change had been accepted—even if the change increased the project's budget and extended its schedule. The ability of a project to adapt to such changes can make a crucial difference in its ultimate value to the organization. After all, the project's objectives are subordinate to those of the organization—not vice versa. Therefore, it is crucial for the project team to understand at the very start of a project: which is more important? Avoiding change or managing it?" (Bigelow 2012, 6).
• One risk is specifying a product that has all the best features of every competitor on the market—for example, designing an industrial motor with the smallest possible footprint, highest efficiency, lowest cost, highest torque, and every accessory available at launch. Merely attempting to surpass the competition in specs prevents a team from looking for a breakthrough solution.
• Teams that successfully define project scope typically start by spending time watching customers use the relevant products or services.
3.6 From the Trenches: Michael Mucha on Sustainability and Adaptive Challenges
Michael Mucha is Chief Engineer and Director for the Madison Metropolitan Sewerage District, serves as the current Chair for ASCE's Committee on Sustainability, and also serves on the Sustain Dane Board of Directors in Madison, Wisconsin. He explains that a project's scope is determined by the kind of problem you're trying to solve. Is it technical—with a clear-cut solution that engineers are traditionally trained to provide? Or is it adaptive—with no definite consensus on how to proceed, with every solution guaranteed to challenge stakeholders' values and beliefs? Or is it a mix of both? Sustainable engineering solutions often involve adaptive challenges.
As an example, he describes a recent project:
We needed to upgrade a waste water pumping station between Madison's Marshall Park boat ramp and a busy bike path. Building the station itself was a technical problem. If we were working in a total vacuum, we could have built it a certain size and capacity and been done with it. But to build this pumping station in such a busy area, one that people had strong feelings about, we had to take an adaptive approach. This meant focusing on providing social benefits, such as public restrooms, two aquatic invasive species boat wash hydrants, and a bike repair station. But we also worked to educate the public about the larger importance of waste water treatment. For example, one simple way to get someone's attention is to explain that, when you flush the toilet, the water travels to the Gulf of Mexico in 40 days. Once you know that, you might be inclined to see a pumping station as part of a larger story—a way to help protect the global environment. In other words, the problem shifted from a technical to an adaptive challenge. Building a pumping station is very straightforward. You could spell out all the steps in a manual. That's the technical part. But there is no manual for solving an adaptive problem. It involves changing people's beliefs and values. In the case of the pumping station, we wanted to change people's ideas about how they think about waste water, so they would see the work on the station as part of something larger. (Mucha 2017)
The distinction between adaptive and technical problems was first spelled out by Ronald A. Heifetz in his 1994 book Leadership Without Easy Answers. For a hands-on, practical introduction to the topic, Mucha recommends The Practice of Adaptive Leadership (Heifetz, Linsky and Grashow 2009).
3.7 Project Context
The Realities of Externalities
One term closely related to context is externality. It refers to a "consequence of an economic activity that is experienced by unrelated third parties" (Investopedia n.d.). An externality can involve "a loss or gain in the welfare of one party resulting from an activity of another party, without there being any compensation for the losing party" (Business Dictionary n.d.). For example, a sudden rise in oil prices could be a devastating externality in a project that depends on a steady and economical fuel supply. Some externalities are positive—for example, Ireland's decision to make public college education essentially free for all citizens made an already highly educated workforce even more attractive to pharmaceutical and software companies, which increased their investment in the country (Friedman 2005). You and your project team have no control over externalities. But your job, as a project manager, is to be on the lookout for them at every turn, and to respond quickly and decisively when they do arise.
According to Merriam-Webster, the term context refers to "the situation in which something happens: the group of conditions that exist where and when something happens." All projects occur within multiple contexts—within an organizational context (both yours and the customer's), a market context, a technical context, and a social context. All of these can change over the life of a project, and in the permanent whitewater of the modern business world, they probably will. Good project managers pay attention to changing context. They realize that, as contexts change, the project will probably need to be adjusted.
Completing the project in accordance with the original objectives could end up being a terrible outcome, if it turns out that the original objectives no longer fit the context of the organization. The potential for changing contexts means that no two projects are the same. Even if you think you've completed an identical project recently, you'll almost certainly find that differences in context will force you to alter your approach in some way or another. For example, the fact that you successfully built a hospital in Detroit can't completely prepare you for the experience of building a hospital in San Francisco, where the area's volatile seismic activity means you need to consider a host of issues related to earthquake resistance. In product development, you might find that the customer did not fully understand their needs at the outset. As you begin to learn what the customer wants, you might see the project in a much broader, more complicated context. Likewise, the introduction of new technology can increase the complexity of a project in ways you couldn't foresee during initiation. To deal with these changes, you need to be able to rely on a flexible project team that can adapt as the project unfolds.
An article by James Kanter in the New York Times describes the construction of two European nuclear power plants that were supposed to be "clones" of each other, with both built according to rigid standards specifying every aspect of the projects down to "the carpeting and wallpaper." The similarity of the projects was supposed to lead to clear sailing for both, but a host of unforeseen technical problems resulted in major delays and cost overruns. This is a perfect example of how contexts—one reactor was in Finland, the other in France—can dramatically affect the outcomes of supposedly identical projects. Problems at the Finnish site included a foundation that was too porous and therefore likely to corrode, inexperienced subcontractors drilling holes in the wrong places, and communication problems arising from a workforce composed of people speaking eight different languages. At the supposedly identical French site, a different array of problems included cracks in the concrete base, incorrectly positioned steel reinforcements, and unqualified welders. According to UniStar Nuclear Energy, the company behind the Finnish and French projects, a fleet of similar reactors is in the works around the world. Who knows what risks will arise on those projects? After all, France and Finland are at least stable, geologically speaking. But as Kanter points out, "Earthquake risks in places like China and the United States or even the threat of storm surges means building these reactors will be even trickier elsewhere" (2009).
Context is especially important in product development, where the backdrop for a new product can change overnight. In a paper arguing for a more flexible approach to product development, M. Meißner and L. Blessing discuss the many ways context influences the product development process:
Designers are influenced by the society in which they live, and their decisions depend on political, social, and financial pressures. The technological environment and the accelerating rate of change is a characteristic of modern times. Changing conditions produce new needs and thereby encourage new developments, innovation is rewarded, and new artifacts are created. Some products require design activity on a far larger scale than others.
Huge one-off products such as power plants or oil platforms require an immense and skillfully organized design operation. Less complex products such as hand tools or toys can be designed by a single person…. The designer could be working in a small company, carrying a variety of responsibilities including the marketing, design, and manufacturing of the product. Or he could be working in a larger company where many people work on a single design project with specified areas of activity and a hierarchy of responsibilities. (70) In changing contexts, flexibility is key. In his studies of successful project managers, Alexander Laufer found that the best project managers deviate from the common “one best way” approach and adjust their practices to the specific context of their project. Avoiding the “one best way” approach does not imply, however, that there are no “wrong ways,” that “anything goes,” or that you must always “start from scratch.” There is always the need to strike a balance between relying on the accumulated knowledge of the organization, on the one hand, and enhancing the flexibility and creativity within each individual project on the other. (216) Laufer argues that modern project managers need to employ a modern, more flexible approach than their predecessors: The classical model of project management, in which standards are developed for virtually all situations, expects the project manager to serve primarily as a controller: to ensure that team members adhere to the established standard. This role entails only a minimal requirement for judgment and no requirement for adaptation. In reality, the project manager must constantly engage in making sense of the ambiguous and changing situation, and he must adjust the common practices to the unique situation. This process requires a great deal of interpretation and judgment based on rich experience. (218) In Lesson 5, we’ll talk about the value of building diverse teams that bring together people with complementary skills—ideally, people of varying ages and levels of experience. But how can new project managers, who lack that all-important “rich experience,” increase their overall understanding of their projects’ multiple contexts? Start by researching past projects with similar characteristics, consulting with mentors, and, generally, checking as many formal and informal sources regarding lessons learned from previous projects as you can find. It also helps to stay well-informed—about your organization, your customers, your industry, and the world in general. For instance, if you were working on a construction project in the healthcare field in the past decade, you would have experienced a pronounced change in context, away from a doctor-centered system to a patient-centered system that seeks to empower patients to define value on their terms (Porter and Lee 2013). If you were new to managing projects in that field, you would be wise to learn all you could about that shift. In the living order, such seismic changes are the norm, not the exception, in nearly all industries. ~Practical Tips • Engage all stakeholders: Your goal is to keep people meaningfully engaged in your project. You don’t want stakeholders showing up for ceremonial appearances at project meetings. Instead, you want them seriously focused on the prospects for project success. • Outcome clarity: Ask your customer to define success right at the beginning. Then, working with the customer and other stakeholders, define how success will be measured. 
• Use a common vocabulary: At the beginning of any project, go to your end-customers and learn their vocabulary. Make sure you understand the terms that are important to them and what such terms mean to them. Whenever possible, use your customer's vocabulary, not yours. Also, strive to speak in plain English whenever you can, and avoid techno speak.
• Create a glossary of terms: On projects with a lot of complex jargon, consider creating a glossary of terms. Then publish it in a way that makes it accessible to all stakeholders, updating it as needed. Here's an example of one such glossary: "COSO Framework."
• Identify what you don't know: When you start a project, there are always things you don't know. The key is to know that you don't know them. The more you strive to recognize this, the better you will be at predicting those unknowns and making provisions for them.
• Have key team members sign major project documents: Research shows that the act of signing a document makes people much more committed to delivering on the promises described in the document. Consider asking the entire project team to sign the project charter and scope documents. This simple act can serve as a powerful inducement to completing the project successfully.
• Proactive concurrency: In the early stages, avoid the trap of plotting one thing after another, in a linear fashion. Instead, start fast, doing as many things as you can concurrently, as quickly as you can. This will give you a sense of whether or not the scope, budget, resources, and schedule are all in relatively close alignment at the macro scale. If you find they are not, report that to management right away.
• Permanent urgency: In the living order in which all modern projects unfold, permanent urgency is the new law of nature. In the traditional, geometric order form of project management, you could assume that you would have sufficient time and resources to do things in a linear, step-by-step manner. But in the modern world, that's rarely the case. Get used to an element of urgency in all projects. Try not to let this paralyze you and your team. Instead, let a sense of urgency spur you on to more agile, alert, and flexible project management techniques.
• Post the project documents prominently: Putting important documents front and center helps a team stay focused, especially if you have everyone sign them first. It also encourages the team to update them when necessary.
• Plan for errors: You and your team will almost certainly make mistakes, especially in the early stages of a project. So plan for that. Keep thinking ahead to what might go wrong, and how you could correct course. Make a habit of keeping back-up plans in your back pocket.
• Define sign-off or acceptance criteria: One good way to define success is to start by drawing up sign-off criteria, or acceptance criteria as they are sometimes called. These are agreed-on deliverables for each key stage of the project that allow the stage to be considered complete. It's common to link these criteria to payments. The value of defining these criteria at the beginning is that they are usually very objective and can continually be referred back to, thus ensuring that all activities are aligned with final deliverables. Major disagreements on whether a project was a success usually come down to a failure to define acceptance criteria. Achieving agreement on this is essential, as it drives everything else (resources, time, budgets, etc.).
• Be prepared for change: Don’t be fooled into thinking that, just because you have created all the documents associated with project initiation, you have everything nailed down. It’s often not possible to foresee the kinds of ongoing changes that arise in the living order. ~Summary • Project initiation is about laying the groundwork for the entire project. Although initiation marks the official beginning of a project, it involves looking into the future, envisioning the project’s entire life cycle, which includes the making stage, the operating/using/changing stage, and the retirement/reuse stage. Even in the more traditional way of looking at project management, the phases of project management usually overlap and often entail looking back at the documents compiled during the initiation phase. • These documents created during initiation typically provide a high-level view of the project. They include the project charter, the scope statement, and the business case. As you create these documents, you should be thinking ahead to creating the following items during the planning phase: work breakdown structure (WBS), organizational breakdown structure (OBS), work package, and the responsibility assignment matrix (RAM). • Experienced project managers know that you need to start fast by defining what “success” means for your project and determining how to measure it. Success means different things in different industries. For example, in capital projects, the total cost of ownership (the total of direct and indirect costs related to the construction and use of a building) is crucial to determining whether or not a building is a success. Be as specific as possible when defining success for your project, without going into needless detail. Traditional project managers tend to define success in terms of completing a project on time and within budget. But Lean presumes a more expansive definition of success—one that prioritizes eliminating waste and maximizing value, and in the process building customer loyalty that will extend to as yet unforeseen projects. In Agile, success means delivering working software after each sprint, and, ultimately, delivering as much working software as the schedule and budget will allow. • A well-defined project charter defines the project’s goals, which in turn dictate the overall organization, schedule, personnel, and, ultimately, the work that will be accomplished. • Of the three constraints on project management—scope, budget, and schedule—scope is the most difficult to pin down. Except for the simplest projects, any scope definition will almost certainly evolve as you learn more about the project and the customer’s needs. The term scope evolution refers to changes that all stakeholders agree on, and that are accompanied by corresponding changes in budget and schedule. Ultimately, the definition of scope is based on what the customer wants, but sometimes you’ll need to guide the customer toward a definition of the project’s scope because the customer might not know what is possible. Take the time to articulate the scope carefully in the form of a scope statement. After you create a scope statement, refer to it regularly to avoid the unauthorized changes known as scope creep. • A project’s scope is determined by the kind of problem you’re trying to solve. Technical problems have clear-cut solutions—the kind engineers are traditionally trained to provide. 
With adaptive problems, things are less clear, with no definite consensus on how to proceed, and with any solution guaranteed to challenge stakeholders' values and beliefs. Some problems are a mix of both.
• All projects occur within multiple contexts—within an organizational context (both yours and the customer's), a market context, a technical context, and a social context. All of these can change over the life of a project, and in the permanent whitewater of the modern business world, they probably will. A project will necessarily evolve as the project's context changes. Your job as a project manager is to be on the lookout for externalities that can affect a project's context.
~Glossary
• business case—An "argument, usually documented, that is intended to convince a decision maker to approve some kind of action. The document itself is sometimes referred to as a business case. As a rule, a business case has to articulate a clear path to an attractive return on investment (ROI). At its simplest, a business case could be a spoken suggestion…. For more complex issues, a business case should be presented in a carefully constructed document. A business case document should examine benefits and risks involved with both taking the action and, conversely, not taking the action. The conclusion should be a compelling argument for implementation" (TechTarget n.d.).
• context—According to Merriam-Webster, the "situation in which something happens: the group of conditions that exist where and when something happens."
• idea averaging—Taking a little from one idea, and a little from another, and a little from another—without fully committing to any.
• linear responsibility chart—See RACI chart.
• organizational breakdown structure (OBS)—A description of the project team. It explains "who reports to whom, the details of the hierarchy, and the reporting structure…. Organizational breakdown structures are normally communicated visually through the use of graphs or charts. A project or general manager is listed and underneath the PM several divisions might be created, such as product development, design, materials management, and production" (Bradley n.d.). See also responsibility assignment matrix (RAM), below.
• planning bias—The tendency to optimistically underestimate the amount of time required to complete a task.
• project charter—A "single, consolidated source of information" (Richter 2014) for project initiation and planning. It describes your current knowledge about the project and includes information such as the names of all stakeholders, a statement of your organization's needs, the history leading up to the project, the project's purpose, deliverables, and roles and responsibilities. A project charter is also sometimes called a project overview statement. It's sometimes helpful to think of the project charter as a contract between the project team and the project sponsors.
• project initiation—The early phase in which you lay the groundwork for the entire project.
• project overview statement—See project charter.
• project scope—All the work "that needs to be done to provide the product or service your project is delivering" (Martinez n.d.).
• responsibility assignment matrix (RAM)—A type of organizational breakdown structure in the form of a grid that typically lists project tasks in the first column, and stakeholders across the top row, with tasks assigned to the various stakeholders. You can use it to determine if you have enough resources for a project, and to record who is responsible for what. See also RACI chart.
• RACI chart—A type of responsibility assignment matrix (RAM). Also known as a linear responsibility chart. The name "RACI" is an acronym of "responsible, accountable, consulted, and informed."
• stakeholders—The people who will be affected by or who can affect a project.
• scope creep—Uncontrolled changes to a project that occur with no corresponding authorized changes in budget and schedule.
• scope statement—A document that defines the project's scope (or requirements).
• work breakdown structure (WBS)—A description of the tasks associated with project deliverables, often in the form of a tree diagram. A work breakdown structure "displays the relationship of each task to the other tasks, to the whole and the end product (goal or objective). It shows the allocation of responsibility, and identifies resources required and time available at each stage for project monitoring and management" (Business Dictionary n.d.).
• work package—A "group of related tasks within a project. Because they look like projects themselves, they are often thought of as sub-projects within a larger project. Work packages are the smallest unit of work that a project can be broken down to when creating your Work Breakdown Structure (WBS)" (Wrike n.d.).
Risk comes from not knowing what you're doing. ―Warren Buffett
Objectives
After reading this chapter, you will be able to
• Discuss issues related to supply chain management and procurement throughout an enterprise
• Explain the role of building effective client-supplier relationships in procurement, discuss issues related to procurement waste, and describe the advantages of emphasizing value over price
• Describe different types of contracts and the types of behaviors they encourage
• Give examples of how procurement issues vary from one context/domain to the next
• Discuss issues related to sustainable procurement
• List items you need to clarify when working on a proposal or contract
The Big Ideas in This Lesson
• It's essential to think strategically about procurement to ensure your project team gets what it needs at the right time, while at the same time building productive, long-term relationships with suppliers.
• Contracts and their terms drive behavior, causing people and organizations to behave in specific ways.
• Procurement is not a one-size-fits-all process. Vital issues related to proposals and contracts vary greatly from one industry to the next, with new types of partnerships emerging to suit changing needs.
4.1 Procurement's Role in Supply Chain Management
Maintaining a healthy supply chain—that is, cultivating a network of "activities, people, entities, information, and resources" that allows a company to acquire what it needs in order to do business—is a major concern for any effective organization (Kenton 2019).
Supply chain management encompasses the planning and management of all activities involved in sourcing and procurement, conversion, and all logistics management activities. Importantly, it also includes coordination and collaboration with…suppliers, intermediaries, third party service providers, and customers. In essence, supply chain management integrates supply and demand management within and across companies. (Council of Supply Chain Management Professionals n.d.)
When done well, supply chain management "results in lower costs and a faster production cycle" (Kenton 2019). It is a living-order discipline focused on protecting a supply chain from the evolving threats to which it is vulnerable. For example, here are just a few recent threats to American industries:
• National and global politics: In the first half of 2019, tariffs on Chinese imports forced American companies to choose between raising prices or absorbing increased costs.
• Production shutdown at key supplier: A 2018 fire at a Michigan parts plant cut off supply of parts for Ford F-150 trucks.
• Changing government regulations: New restrictions on hazardous substances imposed by the European Union limited chemicals U.S. companies could import from the EU after 2017.
• Extreme weather events: Flooding in Thailand in 2011 shut down computer parts factories, crippling hard drive suppliers worldwide.
• Shortage of skilled manufacturing labor: Starting in 2018, American electronics suppliers found that a tight labor market meant they couldn't produce circuit boards on schedule.
As a project manager, you will often have to focus on a core element of the supply chain—procurement. In its simplest usage, the term procurement means acquiring something, usually goods or services.
For example, as a project manager, you might need to procure any of the following:
• Commodities: Fuel oil, computer hardware
• Services: Legal and financial services, insurance
• Expertise: Special technical know-how needed for marketing and communications, public engagement, project design and reviews, or assisting with project approvals
• Outcomes: A specified amount of thrust hours produced by a jet engine; a net reduction in energy usage generated by improving a heating system; conformance to a government regulation
In the construction field, project managers may spend a good deal of their time managing the entire procurement process, selling goods or services in some situations and purchasing goods or services in others. If that's your situation, you might have to create proposals for the work you hope to do and then negotiate the contracts that will set the project in motion. On other projects, you might have to review proposals submitted by potential suppliers and then oversee the final contract with the selected supplier. Throughout, you'll have to navigate the ins and outs of many relationships. By contrast, in manufacturing and product development, project managers often have little to do with procurement. In IT, project management is often closely tied to purchasing and overseeing the implementation of new software products. Whatever your procurement duties are, it's essential to understand overall expectations and the established processes for procurement throughout your organization.
Supply Chain Management: Some History
Supply chain management is a full-blown profession, with people pursuing degrees and certificates devoted to the topic. However, in the early days of U.S. commerce, the role of purchasing the goods and services a company needed in order to conduct business was not given much thought (Inman 2015). This was true up until the late 1960s and 1970s, when the oil crisis and a worldwide shortage of raw materials forced business to recognize purchasing as a vital competitive issue. An entirely new type of management, supply chain management, was born.
4.2 Procurement from the Enterprise to the Project
Figure 4-1: Within an enterprise, procurement takes place on multiple levels
As shown in Figure 4-1, procurement takes place on multiple levels throughout an organization. At the broadest, enterprise level, the term procurement refers to everything an organization does to acquire what it needs to do business. At the project level, procurement refers to everything a project team does to acquire what it needs to complete a project. Adding to the complexity, portfolio and program managers have their own procurement needs, which sometimes conflict with the needs of other managers within the enterprise. A lack of alignment among these various needs can make success for individual projects impossible. Worse, it can sabotage the larger business goals of the entire enterprise. Among other things, procurement includes:
• purchase planning
• standards determination
• specifications development
• supplier research and selection
• value analysis
• financing
• price negotiation
• making the purchase
• supply contract administration
• inventory control and stores
• disposals and other related functions (BusinessDictionary.com n.d.)
In an established organization, a project manager's duties are simplified by the procurement function, which provides the organizational framework, policies, and procedures for acquiring necessary resources. (See Figure 4-2.)
In startups and other less mature organizations, establishing procurement strategies to ensure the organization's long-term well-being may not yet be a high priority. In that case, procurement may be less well defined and more focused on the project level, with project managers left to manage resource acquisition on their own.
Figure 4-2: The enterprise procurement function provides the organizational framework, policies, and procedures for acquiring necessary resources
Project-level procurement sometimes involves getting what you need to complete the project from the project team itself. For example, on a construction project, the team might be responsible for building kitchen cabinets. Other goods and services might be acquired from outside the project team, whether from other teams or departments inside the organization or from outside the organization entirely. In some situations, procurement within the organization is a major concern for project managers. (See Figure 4-3.)
Figure 4-3: Project-level procurement takes many forms
To make good procurement choices for your projects, you need to understand where your project fits within the portfolio and the program. You also need to look at the big picture, and think strategically about procurement, both on behalf of your organization and on behalf of your project team. That means you might need to acquire a product just to maintain a foothold in a tight supply chain. For example, if you're 75% certain that you'll need a piece of computer hardware that is currently in short supply, you might choose to go ahead and procure it because you know that not having it when you need it will bring your project to a full stop. Keep in mind that you can't really think strategically about your organization's procurement needs until you understand the logistics of procurement in your organization. Among other things, you should make sure you understand the following:
• How do you define your requirements to ensure you get what you need?
• What are the established processes?
• Who has authority to initiate, approve, and manage procurement?
• How are changes in scope handled? Who has approval authority?
4.3 Maintaining Procurement Relationships
In an ideal world, the contract resulting from a procurement process is a formal expression of a trusting relationship that already exists between two parties. Even in a less than ideal world, to achieve the best possible results, it can be helpful to think of procurement as a relationship-building process, one that can span many years. It is a form of networking that inexperienced engineers might dismiss as mere schmoozing but is in fact a means of identifying and cultivating the people and organizations who can help you complete your existing projects, develop opportunities for new ones, and advance your career over the long term. A conversation you have with a potential client at a conference might lead to a lunch six months later when you both happen to be in the same airport, which could in turn spark an idea for a new project that might only come to fruition half a decade later. Of course, you need to balance the positive focus on building effective relationships with the need to avoid inappropriate preferences for business partners, which can lead to the unethical practices associated with nepotism, such as kickbacks, bribes, and overpricing of supplies.
By working to get to know potential business partners over time, you can find out if their organization's culture and ethics, as well as their goals and needs, are a good fit for yours. As management consultant Ray Makela explains, this kind of knowledge can be vital in determining if a proposal is a good fit for your company:
Culture fit and ethics are difficult to assess in an RFP, but are one of the most important "intangibles" that can make a difference in who the organization engages with initially and who they continue to do business with in the future. Understanding the culture of the organization and demonstrating behavior that indicates ethics, collaboration, and communication can go a long way to cementing a relationship for the long term. (n.d.)
Even if you are not currently responsible for any procurement tasks, you'd be wise to get to know the people in your organization who do manage procurement. In an article for Supply Chain Management Review, Paul Mandell discusses the unexpected cost-cutting benefits of cultivating relationships within your organization: "Once you have a strong rapport with peers throughout the company, it is increasingly likely that you will gain insight into potential economies that were not otherwise obvious to you" (2016). If you lack the people skills for creating and nurturing these types of relationships, you might want to focus on improving your emotional intelligence, as discussed in Lesson 5.
Repairing Damaged Relationships
Despite your best efforts, sometimes a relationship with a trusted business partner can go awry. Economic downturns can be especially hard on customer-supplier relationships. In an article for Supply Chain Quarterly, Justin Brown gives some tips on repairing damaged procurement relationships:
Step 1: Acknowledge past mistakes
The most important part of this first step is to identify and acknowledge the mistakes that were made on both sides…. Once you have determined that the relationship is worth repairing or saving, it is time to pursue open and honest communication….
Step 2: Find the real source of the problem
The most delicate part of this process involves identifying the root cause of the problems. Bringing in a neutral third party to help both sides review the current relationship and past experiences is one way to maintain objectivity during these discussions….
Step 3: Identify and implement corrective actions
…. Observe the impact of these corrective actions on the original symptoms (the "effect") and ensure that the resulting improvements can be objectively measured and quantified…. It's wise to avoid subjective measurements, which may invite interpretations that lead to more disagreements and conflicts….
Step 4: Monitor and maintain the relationship
After implementing corrective actions, you'll need to conduct management reviews in which progress is discussed, milestones are recognized, and changes to planned milestones are decided upon when necessary…. To improve the likelihood of success, ensure that there is leadership support from both customer and supplier. (2010)
The complete article, which you can read here, is filled with helpful ideas about restoring the relationships you need to keep doing business: "4 Steps to Rebuilding Customer-Supplier Relationships."
4.4 Reducing Procurement Waste
If you've ever gone to the trouble of writing a proposal that ended up ignored on a manager's desk, or negotiating a contract only to find that the relevant project was cancelled at the last minute, you've experienced the waste of time and effort that is often associated with procurement. Indeed, the plague of procurement waste infects all industries. According to Victor Sanvido, former board chairman of the Lean Construction Institute, the potential for procurement waste in the construction industry alone is enormous. In a speech at the National Building Museum, he argued that procurement generates "the single biggest waste in our industry" (Dec. 4, 2013). Unfortunately, as a report for the Project Management Institute explains, "in many business sectors the contribution of procurement is not fully realized or integrated into the strategic considerations of the business" (MacBeth et al. 2012). Procurement as a management-level profession is still relatively new, without the institutional backing and knowledge found in other management fields. Indeed, a study conducted for the Project Management Institute found that even the Institute's own flagship publication, the PMBOK® Guide and Standards, pays woefully insufficient attention to procurement as a competitive strategy (MacBeth et al. 2012). It's no surprise, then, that the potential for waste in the procurement process often goes unrecognized. What sorts of things should a Lean-minded project manager look for in the procurement process? Patrick Williams, of Capgemini Consulting, discusses some common causes of waste, including:
• Contract Negotiation: How many times is the document exchanged between the supplier, legal counsel, and the contracting/sourcing agent? Are there ways to reduce these exchanges? Are these all necessary?
• Approval Processes: How are your sourcing, contracting, and purchase order approval workflows managed? Are employees routinely waiting for manager approval to process or finish work? Are there technology or policy changes that could streamline these approvals without sacrificing controls?
• Sourcing/Purchasing/Contracting: How many reviews take place on a given contract, sourcing event, or purchase order? Are all required? Can authority be tiered or increased to reduce unnecessary oversight? (2013)
The Costco Approach
Costco is a company that has thrived by cultivating long-term relationships with its suppliers, rather than focusing on extracting every last cent of profit from them. They believe they will get better quality and better service over the long term with strong supplier relationships. An article in Retail Merchandiser explains the company's strategy as follows: "By taking care of its vendors, Costco remains top of mind with them when new deals become available. The company values the long-term relationships it deals with as well, and any suggestions to terminate a vendor relationship require in-depth analysis" (Retail Merchandiser Magazine 2012).
As in all aspects of technical project management, success in procurement is directly dependent on a team's ability to recognize and respond to the ever-changing circumstances of the living order. In the next two sections, we'll look at two ways to eliminate procurement waste: collaboration and emphasizing value over price. When companies and their suppliers stake out adversarial positions, with each seeking to claim the best possible deal over the short term, waste is inevitable.
Victor Sanvido laments this type of behavior as a prime cause of procurement waste: “The owner will stop the job for three to six months to decide who they want to put on their team. They’ll make you go through a series of exercises that have no outcome on the end of the job.” As a result of these pointless exercises, he says, “90% of what is generated in procurement is thrown away” (Dec. 4, 2013). It’s far more efficient for companies to collaborate with their potential suppliers early on in the project, soliciting their ideas on design, scheduling, manufacturing processes, logistics, and so on. This two-way conversation between company and supplier should continue long after the contracts are signed, with the goal of creating long-term alliances that benefit all parties. An article in Supply Chain Quarterly argues that establishing these types of reliable procurement alliances is essential to effective supply chain management: Best-in-class companies work closely with suppliers long after a deal has been signed. In most circles today, this is called “supplier relationship management.” But that implies one-way communication (telling the supplier how to do it). Two-way communication, which requires both buyer and seller to jointly manage the relationship, is more effective. A more appropriate term for this best practice might be “alliance management,” with representatives from both parties working together to enhance the buyer/supplier relationship. The four primary objectives of an effective alliance management program with key suppliers include 1. Provide a mechanism to ensure that the relationship stays healthy and vibrant 2. Create a platform for problem resolution 3. Develop continuous improvement goals with the objective of achieving value for both parties 4. Ensure that performance measurement objectives are achieved With a sound alliance management program in place, you will be equipped to use the talents of your supply base to create sustained value while constantly seeking improvement. (Engel 2011) The 2014 Project Management Institute Project of the Year provides an excellent example of what can be achieved by a collaborative approach to procurement. Rio Tinto Alcan Inc., a global leader in aluminum mining and production, began planning a revolutionary aluminum smelter that promised to generate 40 percent more aluminum at a lower cost and with fewer emissions than any other current technology. According to an article in PM Network, the massive project “included construction of 38 smelting pots, with an aluminum production capacity of 60,000 tons per year, a very large electrical substation, and a gas treatment center” (Jones 2014). Before the company could seriously contemplate construction, extensive research was necessary to prove that the new technology would in fact work. This research had the added benefit of illuminating the project’s potential pitfalls. It was clear that teamwork and open communication were key to avoiding them: With more than 100 equipment suppliers and 50 installation contractors working on-site at the same time, the project team knew it needed to tackle integration and communication issues up front. Its preliminary studies showed that people had to understand the project’s strategic goals if the team wanted them to identify problems before they wreaked havoc on the schedule and the budget. 
(Jones 2014)
Project director Michel Charron describes his procurement strategy like this:
Before giving anyone a contract, we would meet them and explain the strategic goal we were pursuing. The hardest part was making sure they had the right attitude and would help build the culture we wanted for this project. (Jones 2014)
To help build an effective team, the project leaders "outlined clear roles and responsibilities and looked for opportunities to improve the flow of information among teams." Charron summed up his overall philosophy: "Everybody has a little bit of the answer. You need to have the whole team working together to achieve something. So we made sure that they could have a good understanding of what others were doing" (Jones 2014).
4.5 Value over Price
This 3.5-minute video describes a finalist for the Project Management Institute's Project of the Year Award. By focusing on the value they wanted the building to deliver, rather than what they wanted the building to be, the team was able to deliver the project ahead of schedule and under budget. https://youtube.com/watch?v=ZfYU-KnR1zw
One important source of waste in the procurement process is an inordinate focus on price rather than value. In the traditional, geometric order approach to proposals and contracts, price is paramount. More than anything else, sellers aim for the highest possible price for their services. In the least effective version of geometric procurement, managers look for the lowest possible price for each individual purchase. Buying incrementally at the lowest price generally results in higher overall project costs and may lead to other unintended consequences such as lower-than-expected performance. A better geometric practice is to seek the lowest total cost of ownership (TCO), which includes both direct and indirect costs associated with the product or service. (For a more complete definition of the term, see the following article: "How to Find Total Cost of Ownership (TCO) for Assets and Other Acquisitions.") The effectiveness of a TCO approach is lessened in a competitive bidding situation; ideally, you would combine TCO with an emphasis on building long-term relationships with high-quality, reliable suppliers.
The Lean, living order approach to proposals and contracts takes an even more expansive view of total cost of ownership, one that emphasizes value over price. The benefits and overall usefulness of a product or service are considered more important than its price in dollars and cents. From the supplier's point of view, a long-lasting relationship that allows both parties to thrive is often far more valuable than negotiating a high price in one particular situation. However, it is essential to avoid creating dependencies that inhibit healthy competition. According to Supply Chain Management Quarterly, businesses are finally beginning to grasp the importance of emphasizing value over price:
For significant spend areas, procurement teams at best-in-class companies are abandoning the outmoded practice of receiving multiple bids and selecting a supplier simply on price. Instead, they consider many other factors that affect the total cost of ownership. This makes good sense when you consider that acquisition costs account for only 25 to 40 percent of the total cost for most products and services. The balance (and majority) of the total comprises operating, training, maintenance, warehousing, environmental, quality, and transportation costs as well as the cost to salvage the product's value later on.
(Engel 2011)

Project managers working on government-funded projects face special procurement challenges. When selecting engineering firms, governments will sometimes allow for a two-stage process, in which they first identify best-qualified engineering firms and then, from among those firms, accept the lowest-priced bid. However, government project managers are sometimes required by law or politics to accept the lowest price. If you find yourself in that situation, whether as a supplier or purchaser, consider making the case that the ultimate price of the project depends on broader, life-cycle costs, including operational and disposal costs. For example, the cost of a new parking lot doesn’t just include the initial cost of building the lot. It also includes maintenance and, eventually, demolition when the aging lot is no longer safe and useful, or when the owner finds a more profitable use for the land.

Boeing’s Procurement Nightmare

The Boeing 787 Dreamliner is a lightweight passenger jet that, thanks to its pioneering composite frame, uses 20 percent less fuel than the planes it is designed to replace. It has been “the darling of aviation enthusiasts around the world since the first version debuted in 2011. Its lightweight, fuel-saving super strong carbon fiber materials and other cutting-edge design features were touted as the future of the airline industry” (Patterson 2015). However, it’s been plagued by billions of dollars in cost overruns, production delays, and serious safety problems, including fires tied to the lithium-ion batteries that are key to the plane’s vaunted energy efficiency. In 2013, a battery fire led to a worldwide grounding of all Boeing 787 Dreamliners.

Much of what went wrong with the Boeing 787 can be traced to poor procurement practices—in particular, outsourcing. To save time and money, and in response to political pressures, the company set out to procure the necessary parts from many companies in many countries. (See Figure 4-4.) “Boeing enthusiastically embraced outsourcing, both locally and internationally, as a way of lowering costs and accelerating development. The approach was intended to ‘reduce the 787’s development time from six to four years and development cost from $10 to $6 billion.’ The end result was the opposite” (Denning 2013). The full story of the Boeing 787 Dreamliner debacle is long and complicated. But it all comes down to two things: a failure to collaborate and a failure to focus on value over price.

Figure 4-4: Lack of alignment among all levels of procurement can be problematic, as happened with the Boeing 787 Dreamliner

4.6 From RFP to Contract

Now let’s zero in on the portion of the procurement process that is a special focus of project managers: proposals and contracts. After an idea makes it through the project selection process and becomes a funded project, an organization typically issues a request for proposal (RFP), which is a “document that describes a project’s needs in a particular area and asks for proposed solutions (along with pricing, timing, and other details) from qualified vendors. When they’re well crafted, RFPs can introduce an organization to high-quality vendor-partners and consultants from outside their established networks and ensure that a project is completed as planned” (Peters 2011). The exact form of an RFP varies from one industry to the next and from one organization to another.
But ideally, an RFP will include the items listed in Appendix 2.1 of Project Management: The Managerial Process, by Erik W. Larson and Clifford F. Gray. You can also find many templates for RFPs on the web.

In response to an RFP, other organizations submit proposals describing, in detail, their plan for executing the proposed project, including budget and schedule estimates, and a list of final deliverables. Officially, the term proposal is defined by Merriam-Webster as “something (such as a plan or suggestion) that is presented to a person or group of people to consider.” Depending on the nature of your company, this “something” might consist of little more than a few notes in an email, or it might incorporate months of research and documentation, costing hundreds of thousands of dollars to produce. When creating a proposal, you should seek to clearly understand and address your client’s needs and interests, convincingly demonstrate your ability to meet their needs (quality, schedule, price), and prepare the proposal in a form that meets requirements.

After reviewing all submitted proposals, the organization that issued the RFP accepts one of the proposals and then proceeds with negotiating a contract with the vendor. The term contract is more narrowly defined as “an agreement with specific terms between two or more persons or entities in which there is a promise to do something in return for a valuable benefit known as consideration” (Farlex n.d.). As with proposals, however, a contract can take many forms, ranging from a submitted invoice (which can serve as a binding agreement) to several hundred pages of legal language.

Contracts and the Behaviors They Encourage

Contracts and their terms drive the behavior of everyone involved. Ideally, a contract is the expression of a trusting relationship between two parties. Such a contract should encourage stakeholders to work together to ensure overall project success and maximize value, rather than spurring stakeholders to optimize their own individual interests. To understand how contracts can affect behavior, it helps to understand the varieties of contracts you might encounter, and the situations in which they can be useful. The two basic varieties are fixed-price and cost-plus:

• fixed-price: An agreement in which the contractor or seller “agrees to perform all work specified in the contract at a fixed price” (Larson and Gray 2011, 451).
• cost-plus: An agreement in which the contractor or seller “is reimbursed for all direct allowable costs (materials, labor, travel) plus an additional fee to cover overhead and profit. This fee is negotiated in advance and usually involves a percentage of the total costs” (Larson and Gray 2011, 452). A similar arrangement, in which costs plus overhead and profit are billed as they are incurred, is sometimes referred to as time and materials.

As shown in Figure 4-5, fixed-price contracts present the greatest risk for the contractor, whereas cost-plus contracts impose the greatest risk on the client. However, both can be beneficial to all parties in the right situations. For example, a fixed-price contract can be beneficial to both parties if the scope is clearly defined, and the costs and schedule are predictable. But in more uncertain situations, when estimating costs is difficult, the contractor takes on the risk of agreeing to a lump-sum price that might turn out to be far too low given changing market conditions or other externalities.
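To make this risk allocation concrete, here is a minimal sketch in Python comparing how the client’s outlay and the contractor’s margin shift under the two contract types as actual costs drift away from the estimate. The contract price, fee rate, and cost figures are illustrative assumptions, not values taken from the text or from Larson and Gray.

```python
# Illustrative sketch only: payment and margin under fixed-price vs. cost-plus
# terms when actual costs differ from the estimate. All numbers are hypothetical.

def fixed_price_outcome(contract_price, actual_cost):
    """Client pays the agreed price; the contractor absorbs any overrun or keeps any savings."""
    return {"client_pays": contract_price,
            "contractor_margin": contract_price - actual_cost}

def cost_plus_outcome(actual_cost, fee_rate=0.10):
    """Client reimburses all allowable costs plus a negotiated fee (assumed here to be 10%)."""
    fee = actual_cost * fee_rate
    return {"client_pays": actual_cost + fee,
            "contractor_margin": fee}

# Costs come in under, on, or well over an assumed $100,000 estimate.
for actual_cost in (90_000, 100_000, 130_000):
    fp = fixed_price_outcome(contract_price=110_000, actual_cost=actual_cost)
    cp = cost_plus_outcome(actual_cost=actual_cost)
    print(f"actual cost {actual_cost:>7,}: fixed-price {fp}  cost-plus {cp}")
```

Under fixed-price terms the client’s bill never moves but the contractor’s margin can turn negative; under cost-plus terms the contractor is always paid, while the client’s bill tracks whatever the work ends up costing. That asymmetry is the risk trade-off illustrated in Figure 4-5.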
On the other hand, cost-plus contracts can impose an excessive risk on the client, because “the contract does not indicate what the project is going to cost until the end of the project.” Furthermore, such an arrangement imposes little “formal incentive for the contractors to control costs or finish on time because they get paid regardless of the final cost” (Larson and Gray 2011, 452).

Figure 4-5: The contractor and client take on varying amounts of risk, depending on the type of contract

Many variations on fixed-price and cost-plus contracts have been devised to modify the risk assumed by both parties. These variations typically involve incentives and penalties that motivate the contractor to work quickly and keep costs under control, and that allow for increases in labor and materials costs or other expenses. For a detailed summary of contracts commonly used in project management, see this blog post: “PMP Study: Types of Contracts.” For some real-world examples of incentives and penalties, see this list of best practices used by the U.S. Environmental Protection Agency for solid waste removal contracts: “Contracting Best Practices: Incentives and Penalties.”

When negotiating a contract, most people and organizations tend to focus on minimizing their own risk. As a result, it’s easy to lose track of the project’s primary purpose: delivering value to the client. Generally speaking, contracts that promote the most equitable allocation of risk create the most value. Such contracts also tend to encourage the best possible behavior on both sides, whereas inequitable contracts tend to bring out the worst in everyone. For example, if a contractor agrees to a fixed-price contract that presumes a ready supply of inexpensive but high-quality roofing material, only to see the price of shingles go sky-high, the contractor might be tempted to cut corners and substitute a cheaper material.

External and Internal Contracts

RFPs, proposals, and contracts were originally used to solicit bids from external organizations, but it’s now common for one department to use RFPs, proposals, and contracts to secure help on a project from other departments in the same organization. Increasingly, organizations distinguish between external contracts—that is, contracts between an organization and external suppliers—and internal contracts—that is, memorandums of agreement between departments within an organization. Unlike an external contract, an internal contract is not designed to stand up to intense legal scrutiny; it is simply a clear explanation of an agreement between two parties.

A blog post for the consulting firm NDMA explains the advantages of the inevitable back-and-forth negotiation of the contracting process, which can be an opportunity for the type of communication so necessary in living order, with both parties articulating their vision of a project. This is true of both external and internal contracts:

Contracting is not a waste of time, not a bureaucratic ritual. The minutes spent working out a mutual understanding of both the customer’s and the supplier’s accountabilities at the beginning of a project can save hours of confusion, lost productivity, and stress later. Furthermore, contracts are the basis for holding staff accountable for results. They are not wish-lists; they’re firm commitments. Staff must never agree to a contract unless they know they can deliver results. Internal contracts also hold customers accountable for their end of the deal.
For example, on an IT development project, clients may have to agree to things like providing their people time to work with the development team, negotiating rights to data with other clients, and doing acceptance testing. By agreeing on customers’ accountabilities up front, projects won’t be delayed by clients who are surprised by unexpected demands; and staff won’t be blamed if clients hold up a project. (NDMA n.d.)

A service-level agreement (SLA) is an example of a type of contract that can be external (for example, between a network service provider and its customers) or internal (for example, between an IT team and the departments for which it provides services). An SLA “documents what services the provider will furnish and defines the performance standards the provider is obligated to meet” (TechTarget n.d.). SLAs have evolved in living order as a way to create a blueprint for services in a world of rapidly changing technology:

SLAs are thought to have originated with network service providers, but are now widely used in a range of IT-related fields. Companies that establish SLAs include IT service providers, managed service providers, and cloud computing service providers. Corporate IT organizations, particularly those that have embraced IT service management (ITSM), enter SLAs with their in-house customers (users in other departments within the enterprise). An IT department creates an SLA so that its services can be measured, justified, and perhaps compared with those of outsourcing vendors. (TechTarget n.d.)

A blog post for Wired makes the case for using SLAs for any undertaking involving cloud computing, which is perhaps the ultimate living order situation, involving ever-changing technologies and huge geographical distances. A well-conceived SLA can serve as a roadmap over this bumpy terrain:

In order to survive in today’s world, one must be able to expect the unexpected as there are always new, unanticipated challenges. The only way to consistently overcome these challenges is to create a strong initial set of ground rules, and plan for exceptions from the start. Challenges can come from many fronts, such as networks, security, storage, processing power, database/software availability or even legislation or regulatory changes. As cloud customers, we operate in an environment that can span geographies, networks, and systems. It only makes sense to agree on the desired service level for your customers and measure the real results. It only makes sense to set out a plan for when things go badly, so that a minimum level of service is maintained. Businesses depend on computing systems to survive. In some sense, the SLA sets expectations for both parties and acts as the roadmap for change in the cloud service—both expected changes and surprises. Just as any IT project would have a roadmap with clearly defined deliverables, an SLA is equally critical for working with cloud infrastructure. (Wired Insider n.d.)

The blog post goes on to list essential items to cover in an SLA. You can read the entire post here: “Service Level Agreements in the Cloud: Who Cares?”

4.7 Different Domains, Different Approaches to Procurement

Procurement is not a one-size-fits-all process. Different situations require different approaches to soliciting bids, submitting proposals, and negotiating contracts. If you’re involved in something simple, like buying a car for your personal use, you probably will want to focus almost exclusively on price and terms.
You’d be wise to shop around, perhaps even traveling to another city to get the best possible deal. When purchasing a software package for use on your personal computer, you’re probably safe taking the same approach. But if you are responsible for buying a customized software solution for a specialized project, price is often less important than ensuring that you buy from a vendor who can provide a reliable implementation of the software. That means you need to get to know the vendor, and perhaps talk to some of the vendor’s clients, to make sure you’re dealing with a company you can rely on.

In manufacturing, single-sourcing is the practice of using one supplier for a particular product. Many large manufacturers are finally coming to terms with the extreme downsides of this form of procurement, in which a single random event can bring manufacturing to a halt. For example, a fire at a seat supplier forced a Jaguar Land Rover factory, which relied entirely on that one supplier, to close for several weeks. In 2011, floods in Thailand affected 70% of the world’s hard drive manufacturing capacity (DatacenterDynamics 2011). Natural disasters and political events alike can cut off access to an individual supplier. The problem with limited supply options is magnified when more than one organization relies on a particular supplier, as has been the case recently with airbags made by Takata, one of only three large airbag suppliers in the world (Sedgwick 2014).

Some organizations prefer to buy software or equipment from one entity, and then hire another company to get it up and running. Others subscribe to the “one throat to choke” philosophy, preferring to purchase everything from a single vendor. That way, if something goes wrong, it’s clear who’s to blame. The phrase “one throat to choke” was coined in 2000 by Scott McNealy, CEO of Sun Microsystems, as a way to sum up the benefits of a new alliance between major players in the IT world. According to Coupa Software CEO Rob Bernshteyn, the phrase “only half-jokingly referred to the intended benefit of the alliance to customers: providing accountability within multi-partner, multi-million-dollar enterprise software deployment…. It spoke to a level of customer frustration that had reached the boiling point, and having one throat to choke actually represented an improvement over the status quo.” McNealy’s statement was a sad commentary on the state of customer support in the IT world back in the late ’90s. Things have improved dramatically since then.

In some situations, buying software or equipment from the same vendor that implements it might be the best approach, but in other situations, working with multiple vendors is preferable. Bernshteyn argues that the best approach to procurement focuses on how to achieve success rather than on mitigating failure. When you work with multiple vendors, he argues, you have the opportunity to learn from one vendor and push other vendors to do better. Ultimately, as is so often the case with procurement, it comes down to relationships. “It might well be that one vendor is the right choice, or two or three. But the right choice is always the one where you find yourself thinking: I see the opportunity that we have together. I see how our views on the world match in some way. I see how we can work together with integrity toward a shared vision of the future. Instead of thinking about ‘one throat to choke,’ you’re thinking about more hands to shake and more backs to slap in shared victory” (2013).
In the private sector, companies are free to take risks in procurement, trying out innovative approaches and working with whatever vendors they choose, without having to explain their every move. But if you are working in the public sector, things are different. When soliciting bids or submitting proposals for government projects, you’ll have to deal with strict regulations designed to ensure transparency and minimize risk. You might be tempted to roll your eyes at what seems like a stodgy, rule-bound approach to getting things done. But public contracts are paid for with public funds, and the public does not like to have its tax dollars wasted. A position paper from The Institute for Public Procurement makes the case for openness in public procurement:

Procurement in the public sector plays a unique role in the execution of democratic government. It is at once focused on support of its internal customers to ensure they are able to effectively achieve their unique missions while serving as stewards of the public whose tax dollars bring to life the political will of its representative governing body. The manner in which the business of procurement is conducted is a direct reflection of the government entity that the procurement department supports. In a democratic society, public awareness and understanding of government practice ensures stability and confidence in governing systems…. The manner in which government conducts itself in its business transactions directly affects public opinion and the public’s trust in its political leaders. (Institute for Public Procurement 2010)

The sums at stake in public procurement are significant—amounting to between 15 and 30 percent of gross domestic product in many countries (United Nations Office on Drugs and Crime 2013, 1). That means the public pays a high price for bribery, conflicts of interest, and other forms of corruption:

These costs arise in particular because corruption in public procurement undermines competition in the market and impedes economic development. This leads to governments paying an artificially high price for goods, services, and works because of market distortion. Various studies suggest that an average of 10-25 per cent of a public contract’s value may be lost to corruption. Applying this percentage to the total government spending for public contracts, it is clear that hundreds of billions of dollars are lost to corruption in public procurement every year. (United Nations Office on Drugs and Crime, 1)

In large infrastructure projects, especially energy and transportation projects, one well-established way to ensure an organization gets its money’s worth far into the future is a DBOM (Design, Build, Operate, Maintain) partnership, in which a private organization builds a facility and operates it on behalf of the public for as long as 20 years. DBOM partnerships, a high-functioning variation on the one-throat-to-choke approach, have been used since the mid-1980s to construct and operate waste-to-energy projects that transform trash into electrical power. These arrangements can span nations and multiple companies, as is the case with a recent agreement between a Swiss energy firm, Hitachi Zosen Inova, the Australian firm New Energy Corporation, an international investment firm called Tribe Infrastructure Group, and the city of Perth, Western Australia (Messenger 2017).
A more cutting-edge version of this type of partnership is DBOOM (Design, Build, Own, Operate, Maintain), which makes it possible for public or private organizations to finance and operate huge undertakings like infrastructure, energy, or transportation projects. On the public side, DBOOM has been used to finance projects like building university campuses or public utilities. On the private side, it is a good alternative for financing projects like data centers, corporate campuses, or healthcare facilities. Such projects can be massively expensive, and face an array of risks, including fluctuating energy markets, changeable availability of resources (including trash, in the case of waste-to-energy facilities), local and national political upheavals (which can affect tax revenues), and construction problems related to new technology. DBOM and DBOOM partnerships facilitate risk-sharing, making it more likely that large-scale projects can proceed. These partnerships can be especially useful in projects that offer sustainability benefits to the public. For example, in the case of municipal waste-to-energy facilities, a DBOM partnership provides the tax advantages of municipal financing while consolidating responsibility for design, construction, and operation to a private vendor. Certainly, new and even more creative ways to finance large-scale projects will be devised over the coming decades. As a project manager, you don’t need to keep track of every variation, especially if you work in IT or product development, where these types of partnerships likely have little to do with your day-to-day work. But it’s good to be aware that they exist because they demonstrate the vast possibilities for procurement and contracts in living order. More and more, procurement is about more than simply signing a contract and delivering a specific product or service. Procurement unfolds in the ever-changing living order, which means change is the new normal. 4.8 Sustainable Procurement The ultimate goal of public procurement is serving the public’s needs, so it’s good news that governments have been leaders in the field of sustainable procurement, which emphasizes goods and services that minimize environmental impacts while also taking into account social considerations, such as eradicating poverty, reducing hazardous wastes, and protecting human rights (Kjöllerström 2008). This report, published by the United Nations, is an excellent introduction to the topic of sustainable procurement in the public sector: “Public Procurement as a Tool for Promoting More Sustainable Consumption and Production Patterns.” Although sustainable procurement is primarily associated with public procurement, private organizations have made significant strides in this area as well. Motivations for going green in the private sector vary, but one recurring theme is that customers and employees see sustainable companies as more prestigious, and so are proud to be associated with them (Network for Business Sustainability 2013). Indeed, many companies are finding that recruiting top-notch employees depends on cultivating a reputation as an organization focused on sustainability. This is particularly true for millennials, who “want to work for companies that project values that align with their own,” with environmental sustainability “gaining ground as a key value for the younger generation” (Dubois 2011). 
This was one major motivation behind the ongoing transformation of Ford’s Dearborn, Michigan headquarters, a massive DBOOM project which you can read about here: “Transforming Our Campus to Transform the Future.” 4.9 Agile Procurement Robert Merrill, a Senior Business Analyst at the University of Wisconsin-Madison, and an Agile coach, points out that “many procurement processes naturally follow or even mandate a negotiation-based approach that is directly at odds with the kind of living order thinking found in the Agile Manifesto, which emphasizes ‘collaboration over contract negotiation’” (pers. comm., June 15, 2018). Nevertheless, some organizations and governments are beginning to rethink their procurement processes in hopes of making them more Agile and, as a result, less costly. One interesting example is an on-going overhaul of the State of Mississippi’s child welfare information system. After some initial missteps, the state decided to emphasize identifying and contracting with many qualified vendors on portions of the project, rather than attempting to hire a single entity to create the entire information system. A blog post published by 18F, an arm of the U.S. government’s General Services Administration, which provided guidance on the project, describes Mississippi’s new approach to an age-old software development dilemma: Mississippi’s initial response to solving this problem was a classic waterfall approach: Spend several years gathering requirements then hire a single vendor to design and develop an entirely new system and wait several more years for them to deliver a new complete solution. According to the project team at Mississippi’s Department of Child Protection Services, this “sounds like a good option, but it takes so long to get any new functionality into the hands of our users. And our caseworkers are clamoring for new functionality.” Instead, they’re taking this opportunity to build the first Agile, modular software project taken on within Mississippi state government, and they’re starting with how they award the contracts to build it. Once this pool of vendors is selected, instead of awarding the entire contract to a single company, Mississippi will release many smaller contracts over time for different sections of the system. This is great for Mississippi. Inspired by the Agile approach, they’ll only need to define what needs to be built next, rather than defining the entire system all up front. This is also great for vendors. Smaller contracts mean smaller vendors can compete. Small businesses can’t manage or deliver on large multi-million dollar software development contracts, and so are often precluded from competing. But with this approach, many contracts could end up in the single-digit millions (or less!). Smaller contracts means more small businesses can compete and deliver work, resulting in a larger and more diverse pool of vendors winning contracts and helping the state. Approaching the project in a modular, Agile fashion can be more cost effective and less risky than a monolithic undertaking. To do it, they plan to take an approach called the “encasement strategy,” under which they will replace the system slowly over time while leaving the legacy system in place. It will work like this: The old database will have an API layered on top of it and then a new interface will be built, one component at a time, without risking the loss of data or major disruptions to their workflow. 
Each module will be standalone with an API interface to interact with the data and the other modules. If they decide to replace a module five years from now, it won’t really impact any of the others. (Cohn and Boone 2016)

4.10 Communication 101

The Beauty of Straight Talk

One cause of procurement waste is the convoluted language used in contracts. Proponents of plain language contracts make the case for simple, straightforward agreements that anyone with a high school education can understand. Such easy-to-read, jargon-free documents minimize disputes and shorten negotiations because the parties no longer have to spend weeks huddled with expensive lawyers, parsing paragraphs of tedious definitions and outmoded grammar. This Harvard Business Review article describes the successful implementation of plain language contracts at GE Aviation: “The Case for Plain-Language Contracts.”

Of course, all types of business documents can benefit from simplification. The less jargon, the better. If you’re not sure you can recognize jargon when you see it, the Plain English Campaign, a British organization that campaigns against “gobbledygook, jargon, and misleading public information,” can help. Click this link to get started generating some gobbledygook: Gobbledygook Generator. After you read a few examples, look for and eliminate similarly meaningless prose from your own writing.

As a project manager, you might be responsible for writing RFPs for your organization’s projects, or proposals in response to RFPs publicized by other organizations. You might also be responsible for drafting parts of a contract—for example, language describing the scope of work. At the very least, you will need to be conversant enough with contract terminology to ensure that a contract proposed by your organization’s legal department adequately translates the project requirements into legal obligations. Whatever form they take, to be useful, RFPs, proposals, and contracts must be specific enough to define expectations for the project, yet flexible enough to allow for the inevitable learning that occurs as the project unfolds in the uncertain, living order of the modern world. All three types of documents are forms of communication that express a shared understanding of project success, with the level of detail increasing from the RFP stage to the contract.

Throughout the proposal and contract stages, it’s essential to be clear about your expectations regarding:

• Deliverables
• Schedule
• Expected level of expertise
• Price
• Expected quality
• Capacity
• Expected length of relationship (short- or long-term)

Take care to spell out:

• Performance requirements
• Basis for payment
• Process for approving and pricing changes to the project plan
• Requirements for monitoring and reporting on project health

At minimum, a proposal should discuss:

• Scope: At the proposal stage, assume you can only define about 80% of the scope. As you proceed through the project you’ll learn more about it and be better able to define the last 20%.
• Schedule: You don’t necessarily need to commit to a specific number of days at the proposal stage, but you should convey a general understanding of the overall commitment, and whether the schedule is mission-critical. In many projects, the schedule can turn out to be somewhat arbitrary, or at least allow for more variability than you might be led to believe at first.
• Deliverables: Make it clear that you have some sense of what you are committing to, but only provide as many details as necessary.
• Cost/resources: Again, make clear that you understand the general picture, and provide only as many specifics as are helpful at the proposal stage.
• Terms: Every proposal needs a set of payment terms, so it’s clear when payments are due. Unless you include “net 30” or “net 60” terms in a proposal, you could find yourself in a situation in which customers refuse to part with their cash until the project is complete.
• Clarifications and exclusions: No proposal is perfect, so every proposal needs something that speaks to the specific uncertainty associated with that particular proposal. Take care to write this part of a proposal in a customer-friendly way and avoid predatory clarifications and exclusions. For example, you might include something like this: “We’ve done our best to write a complete proposal, but we have incomplete knowledge of the project at this point. We anticipate working together to clarify the following issues”—and then conclude with a list of issues.

If you are on the receiving end of a proposal, remember that a potential supplier probably has far more experience than you do in its particular line of business. Keep the lines of communication open and engage with suppliers to use their expertise to help refine deliverables and other project details.

Here are a few tips to keep in mind as you work on contracts:

• Standard vs. Custom: Almost every industry has a set of contractual language that’s been tested through the courts. To the extent that you are able, use that language. With custom contract language, the likelihood that you will be forced to arbitrate or adjudicate to resolve disputes goes way up, because there’s no case law to refer to. You never really know how enforceable a custom contract is until you have to enforce it. Whenever possible, stick with standard contracts.
• Appendices: Contracts almost always have appendices spelling out details such as applicable regulations, licensing agreements, and payment schedules, just to name a few. These are typically cut and pasted from other contracts. Often the person creating a contract neglects to edit the appendices to ensure that they adequately articulate the project issues. If you use contract appendices, make sure they are properly edited to clearly express the issues related to your project.
• Conflicts: Contracts often contain internal inconsistencies, which is why most contain a severability clause that says, essentially, “If something is wrong in this contract, everything else still applies.” You can help make such a clause unnecessary by asking someone to read a contract for you and repeat it back to you in plain English. This can go a long way toward clarifying who is obligated to do what, and toward drawing your attention to any inconsistencies within the contract.
• Predatory language: The older your organization, the more likely its contracts include language that addresses every unique and anomalous event that has ever happened in the history of your company. That language tends to accumulate in contracts and tends to be harsh. But keep in mind that the United States’ system of law does not allow you to enforce unreasonable contract terms on someone, even though they have signed the contract. Predatory language in a contract might give you comfort at the time that a supplier signs it, but if the contract is adjudicated, it may not hold up in court.
Generally, in the United States, we do not use contract law as punishment. We use it as a means to arbitrate decisions. 4.11 Lean Procurement: Even the Best Sometimes Get It Wrong More on the Risks of Single-Sourcing You might think Toyota’s procurement debacle would have put American car manufacturers on alert. Yet in 2018, after a fire at an auto parts plant in Michigan, Ford had to halt production of the F-Series pickup at two different plants. The trucks “generate most of Ford’s profits,” and are the top-selling vehicles in the United States, so shutting down production naturally had a huge effect on Ford’s bottom line. After the fire, Joe Hinrichs, president of Ford’s global operations, said, “We have to rebuild the whole supply chain. It’s really a day-to-day, hour-to-hour situation. We have a plan developed on how to get production started back up, but it’s going to take some time to make that happen” (Naughton and Rauwald 2018). You can read more about this single-sourcing catastrophe here: “Ford Weighs Halting F-150 Output After Supplier Fire.” To learn about Ford’s remarkable recovery, see this article: “Ford is Resuming F-150 Pickup Production Following Supplier’s Fire.” In retrospect, from a Lean perspective, procurement debacles like Boeing’s Dreamliner disaster might seem inevitable. In fact, procurement failures can be hard to foresee. You can probably identify smaller, unexpected procurement failures in projects you’ve worked on. And even the most Lean-enabled company of all time, Toyota, experienced procurement difficulties in the aftermath of Japan’s 2011 earthquake and tsunami. The company’s direct suppliers were not seriously affected by the disaster. However, the company was taken off guard by other procurement difficulties, as explained by Jeffrey K. Liker and Gary L. Convis in The Toyota Way to Lean Leadership: As Toyota quickly found, many of the basic raw materials its suppliers depend on came from the northeast of Japan, near the epicenter of the disaster. Most disturbing to Toyota, it discovered they knew little about the affected companies that were suppliers to suppliers and thus not directly managed by Toyota. Toyota worked with its suppliers, made some direct visits, and put together a map of all the suppliers affected by the disaster. It found there were 500 parts that it was not able to procure just after the March 11 quake. (xx) Toyota put its teams of engineers to work helping its vendors recover from the catastrophe by removing debris, repairing equipment, and so on. By early May, the company was unable to procure only 30 parts, a huge improvement over the 500 unavailable parts immediately after the earthquake. Still, Toyota was forced to halt a great deal of production in Japan and around the world until it resolved its procurement problems, taking a huge financial hit as a result. In the aftermath, the company’s leaders looked inward to determine how to avoid similar problems in the future: The problem was that the suppliers’ sources were invisible to Toyota. Some of Toyota’s suppliers were relying on a single source or two sources in the same geographic area. Toyota had to dig deeper into the supply chain to ensure that a single natural disaster could not bring global production to a halt. But a bigger lesson was the benefit of teams working together across divisions and across regions. Throughout the world, each region needed to check in daily on the condition of parts and make decisions about priorities for building vehicles…. 
The daily communication and cooperation needed to deal with this severe challenge both tested the company and strengthened global cooperation. (xxvii)

Once again, we see the vital role of learning in Lean project management. As Toyota learned more about its suppliers’ suppliers, the company was able to strengthen and expand its supply chain, ensuring that all parts worked together reliably and efficiently.

~Practical Tips

The subject of procurement is vast, with best practices varying from one industry to another, and between government and private sector projects. However, as a practitioner of Lean and living order project management, you’ll want to keep the following considerations in mind no matter what type of project you are working on. This list is adapted from suggestions by Victor Sanvido (Dec. 4, 2013):

• Determine the appropriate amount of time for the whole procurement cycle: In a $150 million design and construction project, that might be 18 months. In a smaller project, 1 month might be realistic. In manufacturing, tooling—setting the factory up with the necessary machinery—could take several months.
• Set a budget for procuring the project: This budget should include the amount vendors will spend on pursuing the project.
• Determine if the project’s procurement requirements allow for best value selection: If they do, ensure that the weights assigned to the evaluation criteria reflect the project’s specific requirements.
• Seek the lowest total cost of ownership (TCO): Instead of focusing solely on the up-front price, focus on the TCO, which includes both direct and indirect costs associated with the product or services.
• Let your team focus on what it does best: When deciding whether to procure project needs from the project team, or from outside the project team, it’s generally best to use your team’s human and financial capital for what you are best at and for things that are mission-centric. Outsource other mission-critical things at highest value. Buy everything else, especially commodities, at the lowest price.
• Remove burdens from contracts that cause team members to prioritize their interests over the project’s interest: Ideally, contracts should allow for the movement of money across team member boundaries. If possible, they should also pool the team’s risk and profits, so that all are rewarded, or all fail.
• Pick the right people: When deciding which companies to partner with, focus on companies that have the cultures, expertise, and capacity to deliver the project. Make sure you know which individuals within each company will be responsible for delivering the project. Be prepared to build a long-term relationship.
• Learn and look to the future: Once procurement is complete, identify the products and processes you discovered during the procurement process that will offset the time spent in procurement in the future.

~Glossary

• contract—According to Merriam-webster.com, “a binding agreement between two or more persons or parties.” A contract can take many forms, ranging from a submitted invoice (which can serve as a binding agreement) to 200 pages of legal language plus appendices.
• cost-plus—An agreement in which the contractor or seller “is reimbursed for all direct allowable costs (materials, labor, travel) plus an additional fee to cover overhead and profit. This fee is negotiated in advance and usually involves a percentage of the total costs” (Larson and Gray 2011, 452). In small projects, this arrangement is sometimes referred to as time and materials.
• DBOM (Design, Build, Operate, Maintain)—A type of partnership in which a private organization builds a facility and operates it on behalf of the public for as long as 20 years. DBOM partnerships have been used since the mid-1980s to construct and operate waste-to-energy projects that transform trash into electrical power.
• DBOOM (Design, Build, Own, Operate, Maintain)—A new variation on DBOM which makes it possible for public or private organizations to finance and operate huge undertakings like infrastructure, energy, or transportation projects.
• fixed-price—An agreement in which the contractor or seller “agrees to perform all work specified in the contract at a fixed price” (Larson and Gray 2011, 451).
• procurement—The process of acquiring goods and services. Used to refer to a wide range of business activities.
• proposal—According to Merriam-webster.com, “something (such as a plan or suggestion) that is presented to a person or group of people to consider.” Depending on the nature of your company, this “something” might consist of little more than a few notes in an email, or it might incorporate months of research and documentation, costing hundreds of thousands of dollars to produce.
• request for proposal (RFP)—A “document that describes a project’s needs in a particular area and asks for proposed solutions (along with pricing, timing, and other details) from qualified vendors” (Peters 2011).
• service-level agreement (SLA)—“A contract between a service provider and its internal or external customers that documents what services the provider will furnish and defines the performance standards the provider is obligated to meet” (TechTarget n.d.). An SLA is an example of a document that can be used to codify an agreement between an organization and external vendors (that is, an external contract), or between departments within an organization (that is, an internal contract).
• single-sourcing—The practice of using one supplier for a particular product.
• supply chain management—According to the Council of Supply Chain Management Professionals, “the planning and management of all activities involved in sourcing and procurement, conversion, and all logistics management activities.”
• sustainable procurement—Procurement that emphasizes goods and services that minimize environmental impacts while also taking into account social considerations, such as eradicating poverty, reducing hazardous wastes, and protecting human rights (Kjöllerström 2008).
• total cost of ownership (TCO)—All the costs associated with owning a particular asset, throughout the lifetime of the asset.
Leadership takes place in the living order. Management takes place in the geometric order.

—John Nelson, PE, Chief Technical Officer, Global Infrastructure Asset Management; Adjunct Professor, Civil & Environmental Engineering, University of Wisconsin-Madison

Objectives

After reading this chapter, you will be able to

• List advantages of teams and strong leadership
• Discuss the role of trust in building a team, and describe behaviors that help build trust
• List motivators and demotivators that can affect a team’s effectiveness
• Explain issues related to managing transitions on a team
• Explain the role of self-organizing teams in Agile
• Describe the advantages of diverse teams and provide some suggestions for managing them
• Discuss the special challenges of virtual teams

The Big Ideas in this Lesson

• Building trust is key to creating an effective team. Reliable promising, emotional intelligence, realistic expectations, and good communication all help team members learn to rely on each other.
• The most effective project managers focus on building collaborative teams, rather than teams that require constant direction from management.
• Teams made up of diverse members are more creative, and better at processing information and coming up with innovative solutions. Organizations with a diverse workforce are significantly more profitable than organizations with a homogeneous workforce.

5.1 Teams in a Changing World

According to Jon R. Katzenbach and Douglas K. Smith, authors of The Wisdom of Teams: Creating the High-Performance Organization, a team is a “small number of people with complementary skills who are committed to a common purpose, performance goals, and approach for which they hold themselves mutually accountable” (1993, 45). Of course, this describes an ideal team. A real team might be quite different. You have probably suffered the pain of working on a team lacking in complementary skills, with no clear common purpose, and plagued by uncommitted members who refuse to hold themselves accountable. However, as a project manager, you need to work with the team you have, not with the team you wish you had, leading your group through the uncertainty inherent in a living order project, and encouraging collaboration at every turn.

Attributes of a good team leader: Most important tool: ears. Most important skills: active listening and reflection (Nelson).

The most powerful sources of uncertainty in any project are the people charged with carrying it out. What’s more, because a project is, by definition, a temporary endeavor, the team that completes it is usually temporary as well, and often must come together very quickly. These facts can exacerbate leadership challenges that are not an issue in more stable situations. Some organizations maintain standing teams that tackle a variety of projects as they arise. But even in those cases, individual team members come and go. These minor changes in personnel can hugely affect the team’s overall cohesion and effectiveness. How can you make your team as effective as possible? For starters, it helps to feel good about being on a team in the first place. According to Katzenbach and Smith, most people either undervalue the power of teams or actually dislike them.
They point to three sources for this skepticism about teams: “a lack of conviction that a team or teams can work better than other alternatives; personal styles, capabilities, and preferences that make teams risky or uncomfortable; and weak organizational performance ethics that discourage the conditions in which teams flourish” (1993, 14). But research shows that highly functioning teams are far more than the sum of their individual members:

First, they bring together complementary skills and experiences that, by definition, exceed those of any individual on the team. This broader mix of skills and know-how enables teams to respond to multifaceted challenges like innovation, quality, and customer service. Second, in jointly developing clear goals and approaches, teams establish communications that support real-time problem solving and initiative. Teams are flexible and responsive to changing events and demands…. Third, teams provide a unique social dimension that enhances the economic and administrative aspects of work…. Both the meaning of work and the effort brought to bear upon it deepen, until team performance eventually becomes its own reward. Finally, teams have more fun. This is not a trivial point because the kind of fun they have is integral to their performance. (1993, 12)

Viewed through the lens of living order, perhaps the most important thing about teams is the way they, by their very nature, encourage members to adapt to changing circumstances:

Because of their collective commitment, teams are not as threatened by change as are individuals left to fend for themselves. And, because of their flexibility and willingness to enlarge their solution space, teams offer people more room for growth and change than do groups with more narrowly defined task assignments associated with hierarchical job assignments. (1993, 13)

A Word on Risk

Joining a team—that is, fully committing yourself to a group of people with a shared goal—is always a risk. But risk can bring rewards for those willing to take a chance. Jon R. Katzenbach and Douglas K. Smith explain that, in their studies of scores of teams, they discovered an underlying pattern:

real teams do not emerge unless the individuals on them take risks involving conflict, trust, interdependence, and hard work. Of the risks required, the most formidable involve building the trust and interdependence necessary to move from individual accountability to mutual accountability. People on real teams must trust and depend on one another—not totally or forever—but certainly with respect to the team’s purpose, performance goals, and approach. For most of us such trust and interdependence do not come easily; it must be earned and demonstrated repeatedly if it is to change behavior. (Katzenbach and Smith 1993)

5.2 Behaviors that Build Trust

Years of psychological research have demonstrated the importance of trust in building effective teams (Breuer, Hüffmeier and Hertel 2016). Because teams often need to come together in a hurry, building trust quickly among members is essential. A team of strangers who are brought together to complete a task in three months can’t draw on the wellspring of interpersonal knowledge and loyalty that might exist among people who have worked side-by-side for years. So as a team leader, you need to focus on establishing trusting relationships at the outset.
Your ultimate goal is to encourage an overall sense of psychological safety, which is “a shared belief held by members of a team that the team is safe for interpersonal risk taking.” Teams that do their work under the umbrella of psychological safety are more effective, in part because they are willing to take the risks required to learn and innovate (Edmondson 1999). Let’s look at a few important traits, techniques, and behaviors that can help you build trust and a sense of psychological safety. Who is the “Right” Person for Your Project? As Laufer et al. explain in their book Becoming a Project Leader, “When it comes to projects, one thing is very clear: ‘right’ does not mean ‘stars.’ Indeed, one of the primary reasons for project ‘dream teams’ to fail is ‘signing too many all-stars.’” More important than an all-star is a project team member fully committed to the project goals. Chuck Athas was one such team member. He worked for Frank Snow, the Ground System and Flight Operations Manager at NASA’s Goddard Space Flight Center. Officially listed as the project scheduler and planner, Chuck was eager to help Frank once the schedule was completed and needed less attention. “Anything that needed to be done, and he didn’t care what it was, he would attack with the same gusto and unflappable drive to succeed,” says Frank. “Whatever it took to get the job done, Chuck would do. Was there anything he couldn’t make happen? Probably something. But with Chuck on the team I felt like I could ask for Cleveland, and the next day he would show up with the deed” (Snow 2003). Chuck demonstrated a lack of ego that most all-stars don’t have. His can-do attitude is the antidote to the not-my-job thinking that can sometimes cause team cohesiveness and project completion to falter. His adherence to the project goals over his own goals made him an ideal team member (Laufer, et al. 2018). Reliable Promising Nothing erodes trust like a broken promise. We all know this. As Michelle Gielan explains in a blog post for Psychology Today: When we don’t keep a promise to someone, it communicates to that person that we don’t value him or her. We have chosen to put something else ahead of our commitment. Even when we break small promises, others learn that they cannot count on us. Tiny fissures develop in our relationships marked by broken promises. (2010) Unfortunately, in fast-moving, highly technical projects, breaking ordinary, everyday promises is inevitable. In living order, it’s just not possible to foresee every eventuality, so the task at the top of today’s To Do list, the one you promised to complete before lunch, might get swept aside in the flood of new tasks associated with a sudden crisis. Keeping Track of Reliable Promises It’s helpful to keep a reliable promise log in a spreadsheet. On a big project, you might have 15-20 reliable promises logged at any one time. At every meeting, open the log and go through the reliable promises to find out which were met and which weren’t. Record a success rate in the log for each person. If you craft the promises correctly, this is an extremely helpful metric on team functionality and performance. A success rate of 70% is marginal. The mid- to high ’80s is good. The low ’90s is very good. A success rate above that means someone’s not telling the truth. That’s why it’s important to distinguish between an ordinary promise, and a reliable promise. In Lean terminology, a reliable promise is an official commitment to complete a task by an agreed-upon time. 
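Before turning to what it takes to make a reliable promise, here is a minimal sketch in Python of the kind of promise log described in the sidebar above. The field names, dates, and example entries are hypothetical; in practice the log is often just a shared spreadsheet reviewed at each meeting.

```python
# Minimal sketch of a reliable promise log. Entries and names are hypothetical;
# in practice this is often a shared spreadsheet reviewed at every meeting.
from dataclasses import dataclass

@dataclass
class Promise:
    owner: str   # person who made the reliable promise
    task: str    # what was promised
    due: str     # agreed-upon completion date
    kept: bool   # updated at the next meeting

log = [
    Promise("Ana", "Complete prototype safety test", "2024-03-01", True),
    Promise("Ana", "Execute maintenance renewal contract", "2024-03-08", False),
    Promise("Raj", "Submit regulatory report draft", "2024-03-08", True),
]

def success_rate(log, owner):
    """Share of a person's logged reliable promises that were kept."""
    owned = [p for p in log if p.owner == owner]
    return sum(p.kept for p in owned) / len(owned) if owned else None

for person in sorted({p.owner for p in log}):
    print(f"{person}: {success_rate(log, person):.0%} of reliable promises kept")
```

Tracked this way over time, the success rates give the team the functionality metric the sidebar describes: rates in the mid-80s to low 90s suggest honest promising, while rates near 100 percent usually mean promises are only being logged when they are already safe.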
In order to make a reliable promise, you need to have:

• Authority: You are responsible and accountable for the task.
• Competence: You have the knowledge to properly assess the situation, or you have the ability to engage someone who can advise you.
• Capacity: You have a thorough understanding of your current commitments and are saying “Yes” because you are confident that you can take on an additional task, not because you want to please the team or the team leader.
• Honesty: You sincerely commit to complete the task, with the understanding that if you fail, other people on your team will be unable to complete their work.
• Willingness to correct: After making a reliable promise, if you miss the completion date, then you must immediately inform your team and explain how you plan to resolve the situation. (Nelson, Motivators and Demotivators for Teams 2017)

Not every situation calls for an official reliable promise. John Nelson estimates that, on most projects, no more than 10 to 20 percent of promises are so important that they require a reliable promise (2019). As Hal Macomber explains in a white paper for Lean Project Consulting, you should save reliable promises for tasks that must be completed so that other work can proceed. And keep in mind that you’ll get the best results from reliable promises if they are made in a group setting, where other teammates can chime in with ideas on how to complete the task efficiently or suggest alternatives to the proposed task. Finally, remember that people tend to feel a more positive sense of commitment to a promise if they understand that they have the freedom to say no:

A sincere “no” is usually better than a half-hearted “ok.” You know exactly what to do with the no—ask someone else. What do you do with a half-hearted “ok?” You can worry, or investigate, or not have time to investigate and then worry about that. Make it your practice to remove fear from promising conversations. (2010)

You can read Hal Macomber’s helpful introduction to reliable promising here: “Securing Reliable Promises on Projects: A Guide to Developing a New Practice.”

The practice of reliable promising was developed as a way to keep Lean projects unfolding efficiently in unpredictable environments. Ultimately, reliable promises are an expression of respect for people, which, as discussed in Lesson 1, is one of the six main principles of Lean. They encourage collaboration and help build relationships among team members. In Agile, the commitments made in every Scrum are another version of reliable promises. And the sincere commitment offered by a reliable promise can be useful in any kind of project. Here are some examples of situations in which reliable promising could be effective:

• For a product development project, when will an important safety test be completed?
• For a medical technology project, will a report required to seek regulatory approval be completed on time?
• For an IT project, will the procurement team execute a renewal contract for the maintenance agreement before the current agreement expires? If not, the organization risks having no vendor to support an essential software component.

Using Emotional Intelligence

As a manager of technical projects, you might be inclined to think that, as long as you have the technical details under control, you have the whole project under control.
But if you do any reading at all in the extensive literature on leadership, you'll find that one characteristic is crucial to building trusting relationships with other people: emotional intelligence, or the ability to recognize your own feelings and the feelings of others. High emotional intelligence is the hallmark of a mature, responsible, trustworthy person. In fact, a great deal of new research suggests that skills associated with emotional intelligence—"attributes like self-restraint, persistence, and self-awareness—might actually be better predictors of a person's life trajectory than standard academic measures" (Kahn 2013). An article in the Financial Post discusses numerous studies that have tied high emotional intelligence to success at work:
A recent study, published in the Journal of Organizational Behavior, by Ernest O'Boyle Jr. at Virginia Commonwealth University, concludes that emotional intelligence is the strongest predictor of job performance. Numerous other studies have shown that high emotional intelligence boosts career success. For example, the U.S. Air Force found that the most successful recruiters scored significantly higher on the emotional intelligence competencies of empathy and self-awareness. An analysis of more than 300 top level executives from 15 global companies showed that six emotional competencies distinguished the stars from the average. In a large beverage firm, using standard methods to hire division presidents, 50% left within two years, mostly because of poor performance. When the firms started selecting based on emotional competencies, only 6% left and they performed in the top third of executive ranks. Research by the Center for Creative Leadership has found the primary cause of executive derailment involves deficits in emotional competence. (Williams 2014)
According to Daniel Goleman, author of the influential book Emotional Intelligence: Why It Can Matter More Than IQ, it's well established that "people who are emotionally adept—who know and manage their own feelings well, and who read and deal effectively with other people's feelings—are at an advantage in any domain of life, whether romance and intimate relationships or picking up the unspoken rules that govern success in organizational politics" (1995, 36). In all areas of life, he argues, low emotional intelligence increases the chance that you will make decisions that you think are rational, but that are in fact irrational, because they are based on unrecognized emotion. And nothing erodes trust like a leader who imposes irrational decisions on a team.
To keep your team working smoothly, make regular use of these important words:
• I'm not sure.
• What do you think?
• I don't know.
• Please.
• Thank you.
• I was wrong. You were right.
• Good job!
Some people are born with high emotional intelligence. Others can cultivate it by developing qualities and skills associated with emotional intelligence, such as self-awareness, self-control, self-motivation, and relationship skills. Of course, it's no surprise that these are also useful for anyone working on a team. Treating others the way they want to be treated—not how you want to be treated—is a sign of a mature leader, and something that is only possible for people who have cultivated the emotional intelligence required to understand what other people want.
Cultivating a Realistic Outlook
You might have had experience with an overly negative project manager who derailed a project with constant predictions of doom and gloom. But in fact, the more common enemy of project success is too much positivity, in which natural human optimism blinds team members to reality. That's a sure-fire way to destroy painstakingly built bridges of trust between team members. In her book Bright-Sided: How Positive Thinking is Undermining America, social critic Barbara Ehrenreich explains the downside of excessive optimism, which, she argues, is a special failing of American businesses (2009). The optimist clings to the belief that everything will turn out fine, even when the facts indicate otherwise, and so fails to prepare for reality. The optimist also has a tendency to blame the victims of unfortunate events: "If only they'd had a more positive attitude in the first place, nothing bad would have happened."
In the planning phase, an overly optimistic project manager can make it difficult for team members to voice their realistic concerns. In a widely cited article in the Harvard Business Review, psychologist Gary Klein argues that projects fail at a "spectacular rate," in part because "too many people are reluctant to speak up about their reservations during the all-important planning phase." To counteract this effect, Klein pioneered the idea of a troubleshooting session—which he calls a premortem—held early in a project, in which people who understand the project but are concerned about its potential for failure feel free to express their thoughts. This widely used technique encourages stakeholders to look to the future and analyze the completed project as if it were already known to be a total failure:
A premortem is the imaginary converse of an autopsy; the hindsight this intelligence assessment offers is prospective. In sum, tasking a team to imagine that its plan has already been implemented and failed miserably increases the ability of its members to correctly identify reasons for negative future outcomes. This is because taking a team out of the context of defending its plan and shielding it from flaws opens new perspectives from which the team can actively search for faults. Despite its original high level of confidence, a team can then candidly identify multiple explanations for failure, possibilities that were not mentioned let alone considered when the team initially proposed then developed the plan. The expected outcomes of such stress-testing are increased appreciation of the uncertainties inherent in any projection of the future and identification of markers that, if incorporated in the team's design and monitoring framework and subsequently tracked, would give early warning that progress is not being achieved as expected. (Serrat 2012)
Communicating Clearly, Sometimes Using Stories
Reliable promises, emotional intelligence, and a realistic outlook are all meaningless as trust-building tools if you don't have the skills to communicate with your team members. In his book Mastering the Leadership Role in Project Management, Alexander Laufer explains the vital importance of team communication:
Because a project functions as an ad hoc temporary and evolving organization, composed of people affiliated with different organizations, communication serves as the glue that binds together all parts of the organization.
When the project suffers from high uncertainty, the role played by project communication is even more crucial. (2012, 230) Unfortunately, many people think they are better communicators than they actually are. Sometimes a person will excel at one form of communication but fail at others. For instance, someone might be great at small talk before a meeting but continually confuse co-workers with poorly written emails. This is one area where getting feedback from your co-workers can be especially helpful. Another option is taking a class, or at the very least, consulting the numerous online guides to developing effective communication skills. To help you get started, here are a few quick resources for improving vital communication skills: • Making small talk—People often say they dislike small talk, but polite conversation on unimportant matters is the lubricant that keeps the social gears moving, minimizing friction, and making it possible for people to join forces on important matters. If you’re bad at small talk, then put some time into learning how to improve; you’ll get better with practice. There’s no better way to put people at ease. This article includes a few helpful tips: “An Introvert’s Guide to Small Talk: Eight Painless Tips. • Writing good emails—An ideal email is clear, brief, calm, and professional. Avoid jokes, because you can never be certain how team members (especially team members in other countries) will interpret them. A good emailer also understands the social rules that apply to email exchanges, as explained here: “The Art of the Effective Business Email.” • Talking one-on-one—Nothing beats a face-to-face conversation for building trust and encouraging an efficient exchange of ideas, as long as both participants feel comfortable. In fact, Alexander Laufer suggests using face-to-face conversation as the primary communication mode for your team (2012, 230). As a team leader, it’s your job to be aware of the many ways conversations can go awry, particularly when subordinates fear speaking their mind. This excellent introduction to the art of conversation includes tips for recognizing signs of discomfort in others: “The Art of Conversation: How to Improve Face-to-Face Communication in a Digital World.” Telling stories is an especially helpful way to share experiences with your team. Indeed, stories are “a form of communication that has been used to entertain, persuade, inspire, impart wisdom, and teach for thousands of years. This wide range of uses is due to a story’s remarkable effect on human emotion, experience, and cognition” (Kerby, DeKorver and Cantor 2018). You’ve probably experienced the way people lower their defenses when they realize they are hearing a tale about specific characters, with an uncertain outcome, rather than a simple recitation of events, or worse, a lecture. Master storytellers seem to do it effortlessly, but in fact they usually shape their stories around the same basic template. Holly Walter Kerby, executive director of Fusion Science Theater, and a long-time science educator, describes the essential story elements as follows: • A main character your audience can identify with—Include enough details to allow your audience to feel a connection with the main character, and don’t be afraid to make yourself the protagonist of your own stories. • A specific challenge—Set up the ending of the story by describing a problem encountered by the main character. 
This will raise a question in the minds of the audience members and make them want to listen to the rest of the story to find out what happens.
• Can Sam and Danielle recover from a supplier's bankruptcy and figure out how to get three hundred light fixtures delivered to a new office building in time for the grand opening?
• Can Hala, a mere intern, prevent seasoned contractors from using an inferior grade of concrete?[1]
• Three to five events related by cause and effect—The events should build on each other, and show the characters learning something along the way. Describe the events in a way that helps build a sense of tension.
• One or two physical details—People tend to remember specific physical details. Including one or two is a surprisingly effective way to make an entire story more memorable.
• The first new vendor Sam and Danielle contacted agreed to sell them all the light fixtures they needed, but ended up sending only one fixture in a beaten-up box with the corners bashed in.
• Hala, a small person, had to wear an oversized helmet and vest on the job site, which emphasized that she was younger and less experienced than the contractors.
• An outcome that answers the question—The outcome should be simple and easy to understand. Most importantly, it should answer the question posed at the beginning of the story.
• Yes—by collaborating with a new supplier, Sam and Danielle were able to acquire the light fixtures in time for the grand opening.
• No—Hala could not stop the contractors from using inferior concrete, but she did report the problem to her boss, who immediately halted construction until the concrete could be tested, and, in the end, replaced.
• A satisfying ending—Explain how the events in the story led to some kind of change in the characters' world.
• Sam and Danielle learned to focus on building relationships with reliable, financially stable vendors.
• Hala learned that even an intern can safeguard a project by speaking up when she sees something wrong.
Keep in mind that in some high-stakes situations, the last thing you want is more tension. In that case, you want the opposite of a story—a straightforward recitation of the facts. For example, when confronting a team member about poor work habits, or negotiating with an unhappy client, it's best to keep everything simple. Draining the drama from a situation helps everyone stay focused on the facts, keeping resentment and other negative emotions to a minimum (Manning 2018, 64). For more on good techniques for difficult conversations, see Trevor Manning's book Help! I need to Master Critical Conversations.
[1] Thanks to Hala Nassereddine for sharing the story of her experience as an intern on a construction site in Beirut, Lebanon.
The Beauty of Face-to-Face Communication
As Laufer et al. point out in their book Becoming a Project Leader, "In contrast to interactions through other media that are largely sequential, face-to-face interaction makes it possible for two people to send and receive messages almost simultaneously. Furthermore, the structure of face-to-face interaction offers a valuable opportunity for interruption, repair, feedback, and learning that is virtually instantaneous. By seeing how others are responding to a verbal message even before it is complete, the speaker can alter it midstream in order to clarify it. The immediate feedback in face-to-face communication allows understanding to be checked, and interpretation to be corrected.
Additionally, face-to-face communication captures the full spectrum of human interaction, allowing multiple cues to be observed simultaneously. It covers all the senses—sight, hearing, smell, taste, and touch—that provide the channels through which individuals receive information” (2018). Certainly, in today’s world of project management, in which distributed digital teams are becoming common practice, it may be impossible to sit down in the same room with all team members. But as much as possible, project managers should push for using technology that allows a fuller communication environment—one in which interactions are not just isolated to text. For more, see “The Place of Face-to-Face Communication in Distributed Work” by Bonnie A. Nardi and Steve Whittaker.” 5.3 Team Motivators and Demotivators To build believable performances, actors start by figuring out their characters’ motivations—their reasons for doing what they do. As a team leader, you can use the same line of thinking to better understand your team members. Start by asking this question: Why do your team members do what they do? Most people work because they have to, of course. But their contributions to a team are motivated by issues that go way beyond the economic pressures of holding onto a job. In their book The Progress Principle: Using Small Wins to Ignite Joy, Engagement, and Creativity at Work, Teresa Amabile and Steven Kramer argue that the most important motivator for any team is making meaningful daily progress toward an important goal. In their study of 12,000 daily journal entries from team members in a variety of organizations and industries, they found that a sense of accomplishment does more to encourage teamwork, on-the-job happiness, and creativity than anything else. “Even when progress happens in small steps,” the researchers explain, “a person’s sense of steady forward movement toward an important goal can make all the difference between a great day and a terrible one” (2011, 77). According to Amabile and Kramer, the best managers focus on facilitating progress by removing roadblocks and freeing people up to focus on work that matters: When you do what it takes to facilitate progress in work people care about, managing them—and managing the organization—becomes much more straightforward. You don’t need to parse people’s psyches or tinker with their incentives, because helping them succeed at making a difference virtually guarantees good inner work life and strong performance. It’s more cost-effective than relying on massive incentives, too. When you don’t manage for progress, no amount of emotional intelligence or incentive planning will save the day (2011, 10). As you might expect, setbacks on a project can have the opposite effect, draining ambition and creativity from a team that, only days before, was charging full steam ahead toward its goal. But setbacks can be counterbalanced by even small wins—“seemingly minor progress events”—which have a surprising power to lift a team’s spirits, making them eager to get back to work the next day (2011, 80). You’ve probably experienced the pleasure that comes from checking at least one task off your to-do list. Even completing a small task can generate a sense of forward momentum that can propel a team toward larger achievements. Amabile and Kramer’s book is a great resource for team managers looking to improve their motivational abilities. 
If you don’t have time to read the whole book, they summarize their research and advice in this Harvard Business Review article: https://hbr.org/2011/05/the-power-of-small-wins. Through years of practical experience as an executive, consultant, project engineer, and project manager, John Nelson has gained a finely honed understanding of how to manage teams. According to Nelson, the following are essential for motivators for any team: • A sense of purpose—Individually, and as a whole, a team needs an overarching sense of purpose and meaning. This sense of purpose should go beyond each individual’s project duties. On the macro level, the sense of purpose should align with the organization’s strategy. But it should also align, at least sometimes, with each individual’s career and personal goals. • Clear performance metrics—How will the team and its individual members be evaluated? What does success look like? You need to be clear about this, but you don’t have to be formulaic. Evaluations can be as subjective as rating a dozen characteristics as good/not-good, or on a score of 1-5. • Assigning the right tasks to the right people—People aren’t commodities. They aren’t interchangeable, like a router or a hand saw. They are good at specific things. Whenever possible, avoid assigning people to project tasks based on capacity—that is, how much free time they have—and instead try to assign tasks that align with each individual’s goals and interests. • Encouraging individual achievement—Most people have long-term aspirations, and sometimes even formalized professional development plans. As team leader, you should be on the lookout for ways to nudge team members toward these goals. It’s not your job to ensure that they fully achieve their personal goals, but you should try to allow for at least a little forward movement. • Sailboat rules communication, in which no one takes offense for clear direction—On a sailboat, once the sail goes up, you need to be ready to take direction from the captain, who is responsible for the welfare of all on board, and not take offense if he seems critical or unfriendly. In other words, you can’t take things personally. Likewise, team members need to set their egos aside and let perceived slights go for the sake of the team. When you start a big project, explain that you are assuming sailboat rules communication. That means that, in a meeting, no one has the privilege of taking anything personally. • Mentorship—Team members need to be able to talk things over with more experienced people. Encourage your team to seek out mentors. They don’t necessarily have to be part of the project. • Consistency and follow-through—Team morale falls off when inconsistency is tolerated or when numerous initiatives are started and then abandoned. Encourage a team environment in which everyone does what they commit to do, without leaving loose ends hanging. Be on the lookout for gaps in a project, where things are simply not getting done. (Nelson 2017) Nelson also recommends avoiding the following demotivators, which can sap the life out of any team: • Unrealistic or unarticulated expectations—Nothing discourages people like the feeling that they can’t succeed no matter how hard they try. Beware of managers who initiate an impossible project, knowing full well that it cannot be accomplished under the established criteria. Such managers think that, by setting unrealistic expectations, they’ll get the most out of their people, because they’ll strive hard to meet the goal. 
In fact, this approach has the opposite effect—it drains people of enthusiasm for their work and raises suspicions that another agenda, to which they are not privy, is driving the project. Once that happens, team members will give up trying to do a good job.
• Ineffective or absent accountability—Individual team members pay very close attention to how their leader handles the issue of accountability. If members sense little or no reason to stay on course, they'll often slack off. As often as possible, stop and ask your team two essential questions: 1) How are we doing relative to the metrics? 2) How do we compare to what we said we were going to do? If the answers are encouraging, that's great. But if not, you need to ask this question: What are we going to do to get back on track?
• Lack of discipline—An undisciplined team fails to follow through on its own rules. Members show up late for meetings, fail to submit reports on time, and generally ignore agreed-upon standards. This kind of lackadaisical attitude fosters poor attention to detail and a general sense of shoddiness. As a team leader, you can encourage discipline by setting a good example, showing up bright and early every day, and following the team rules. Make sure to solicit input from team members on those rules, so everyone feels committed to them at the outset.
• Anti-team behavior—Self-centered, aggressive bullies can destroy a team in no time, making it impossible for less confrontational members to contribute meaningfully. Overly passive behavior can also be destructive because it makes people think the passive team member lacks a commitment to project success. Finally, bad communication—whether incomplete or ineffective—is a hallmark of any poorly functioning team. (Nelson 2017)
The Best Reward Isn't Always What You Think
In his book, Drive, Daniel Pink digs into the question of how to have a meaningful, purpose-driven work life. For a quick summary of his often surprising ideas, see this delightful, eleven-minute animated lecture: "Autonomy, Mastery, Purpose: The Science of What Motivates Us, Animated." Among other things, Pink explains that cash rewards aren't always the motivators we think they are. For simple, straightforward tasks, a large reward does indeed encourage better performance. But for anything involving conceptual, creative thinking, rewards have the opposite effect: the higher the reward, the poorer the performance. This has been replicated time and time again by researchers in the fields of psychology, economics, and sociology. It turns out the best way to nurture engaged team members is to create an environment that allows for autonomy, mastery, and a sense of purpose (Pink 2009).
One form of motivation—uncontrolled external influences—can have positive or negative effects. For example, in 2017, Hurricanes Harvey and Irma inflicted enormous damage in Texas and Florida. That had the effect of energizing people to jump in and help out, creating a nationwide sense of urgency. By contrast, the catastrophic damage inflicted on Puerto Rico by Hurricane Maria, and the U.S. government's slow response, generated a sense of outrage and despair. One possible reason for this difference is that, on the mainland, people could take action on their own, arriving in Florida or Texas by boat or car. Those successes encouraged other people to join the effort, creating a snowball effect.
But the geographic isolation of Puerto Rico, and the complete failure of the power grid, made it impossible for the average person to just show up and help out. That, in turn, contributed to the overall sense of hopelessness. This suggests that small successes in the face of uncontrolled external influences can encourage people to band together and work even harder as a team. But when even small signs of success prove elusive, uncontrolled external influences can be overwhelming. As a technical team leader, you can help inoculate your team against the frustration of external influences by making it clear that you expect the unexpected. Condition your team to be prepared for external influences at some point throughout the project. For example, let your team know if you suspect that your project could possibly be terminated in response to changes in the market. By being upfront about the possibilities, you help defuse the kind of worried whispering that can go on in the background, as team members seek information about the things they fear. If you’re working in the public domain, you’ll inevitably have to respond to influences that might seem pointless or downright silly—long forms that must be filled out in triplicate, unhelpful training sessions, and so on. Take the time to prepare your team for these kinds of things, so they don’t become demotivated by them. 5.4 Managing Transitions High performing teams develop a rhythm. They have a way of working together that’s hard to quantify and that is more than just a series of carefully implemented techniques. Once you have the pleasure of working on a team like that, you’ll begin to recognize this rhythm in action and you’ll learn to value it. Unfortunately, you might also experience the disequilibrium that results from a change in personnel. Endless books and articles have been written on the topic of change management, with a focus on helping people deal with new roles and personalities. Your Human Resources department probably has many resources to recommend. Really, the whole discipline comes down to, as you might expect with all forms of team management, good communication and sincere efforts to build trust among team members. Here are a few resources with practical tips on dealing with issues related to team transitions: • In his book, Managing Transitions, William Bridges presents an excellent model for understanding the stages of transition people go through as they adapt to change. The first stage—Ending, Losing, and Letting Go—often involves great emotional turmoil. Then, as they move on to the second stage—the Neutral Zone—people deal with the repercussions of the first stage, perhaps by feeling resentment, anxiety, or low morale. In the third stage—the New Beginning—acceptance and renewed energy kick in, and people begin to move forward (Mind Tools n.d.). You can read more about the Transition Model here: “Bridges’ Transition Model: Guiding People Through Change.” • A single toxic personality can undermine months of team-building. This article gives some helpful tips on dealing with difficult people: “Ten Keys to Handling Unreasonable and Difficult People: 10 Strategies for Handling Aggressive or Problem Personalities.” • This article offers suggestions on how to encourage likability, and, when that doesn’t work, how to get the most out of unpleasant people: “Competent Jerks, Lovable Fools, and the Formation of Social Networks.” • A change in leadership can stir up all sorts of issues. 
This article suggests some ideas for dealing with change when you are the one taking command: “Five Steps New Managers Should Take To Transition Successfully From Peer To Boss.” • As you’ve probably learned from personal experience, when individual members are enduring personal or professional stress, their feelings can affect the entire group. And when a team member experiences some kind of overwhelming trauma, shock waves can reverberate through the whole group in ways you might not expect. This article explains how an individual’s experience of stress and trauma can affect a workplace, and provides some tips for managing the emotions associated with traumatic events: “Trauma and How It Can Adversely Affect the Workplace.” 5.5 Self-Organizing Agile Teams Agile software development was founded as a way to help team members work together more efficiently and companionably. In fact, three of the twelve founding principles of the methodology focus on building better teams: • The most efficient and effective method of conveying information to and within a development team is face-to-face conversation. • The best architectures, requirements, and designs emerge from self-organizing teams. • At regular intervals, the team reflects on how to become more effective, then tunes and adjusts its behavior accordingly. (Beedle et al. 2001) The term “self-organizing teams” is especially important to Agile. Nitin Mittal, writing for Scrum Alliance, describes a self-organizing team as a “group of motivated individuals, who work together toward a goal, have the ability and authority to take decisions, and readily adapt to changing demands” (2013). But that doesn’t mean Agile teams have no leaders. On the contrary, the Agile development process relies on the team leader (known as the ScrumMaster in Scrum) to guide the team, ideally by achieving “a subtle balance between command and influence” (Cohn 2010). Sometimes that means moving problematic team members to new roles, where they can be more effective, or possibly adding a new team member who has the right personality to interact with the problematic team member. In a blog for Mountain Goat Software, Mike Cohn puts it like this: There is more to leading a self-organizing team than buying pizza and getting out of the way. Leaders influence teams in subtle and indirect ways. It is impossible for a leader to accurately predict how a team will respond to a change, whether that change is a different team composition, new standards of performance, a vicarious selection system, or so on. Leaders do not have all the answers. What they do have is the ability to agitate teams (and the organization itself) toward becoming more agile. (2010) 5.6 The Power of Diversity The rationale for putting together a team is to combine different people, personalities, and perspectives to solve a problem. Difference is the whole point. Diverse teams are more effective than homogenous teams because they are better at processing information and using it to come up with new ideas. According to David Rock and Heidi Grant, diverse teams tend to focus more on facts, process those facts more carefully, and are more innovative (2016). What’s more, researchers investigating creativity and innovation have consistently demonstrated “the value of exposing individuals to experiences with multiple perspectives and worldviews. It is the combination of these various perspectives in novel ways that results in new ideas ‘popping up.’ Creative ‘aha’ moments do not happen by themselves” (Viki 2016). 
In his book The Difference: How the Power of Diversity Creates Better Groups, Firms, Schools, and Societies, Scott Page puts it like this:
As individuals we can accomplish only so much. We're limited in our abilities. Our heads contain only so many neurons and axons. Collectively, we face no such constraint. We possess incredible capacity to think differently. These differences can provide the seeds of innovation, progress, and understanding. (2007, xxx)
Despite these widely documented advantages of diverse teams, people often approach a diverse team with trepidation. Indeed, bridging differences can be a challenge, especially if some team members feel threatened by ideas and perspectives that feel foreign to them. But diversity can result in conflict, even when everyone on the team only wants the best for others. This is especially true on teams made up of people from different countries. Such teams are vulnerable to cultural misunderstandings that can transform minor differences of opinion into major conflicts. Cultural differences can also make it hard for team members to trust each other, because different cultures have different ways of demonstrating respect and trust. In her book The Culture Map: Breaking Through the Invisible Boundaries of Global Business, Erin Meyer describes negotiations between people from two companies, one American and one Brazilian. The first round of negotiations took place in Jacksonville, Mississippi, with the American hosts taking care to stick to the agenda, so as to avoid wasting any time:
At the end of the two days, the American team felt great about all they had accomplished. The discussions, they believed, were efficient and productive. The short lunches and tight scheduling signified respect for the time the Brazilians invested in preparing for the negotiations and traveling to an out-of-the-way location. The Brazilians, on the other hand, were less upbeat and felt the meetings had not gone as well as hoped. (2014, 164)
As it turned out, the Brazilians felt that the efficient, organized American approach left them no time to get to know their potential new business partners. During the next round of negotiations, in Brazil, the South American hosts left time for long lunches and dinners that "stretched into the late evening," with lots of good food and conversation. But this "socializing marathon" made the Americans uncomfortable because they thought the Brazilians weren't taking the negotiations seriously. In fact the opposite was true—the Brazilians were attempting to show respect for the Americans by trying to get to know them so as to develop "personal connection and trust" (2014, 163-165).
Decades of psychology research have established that the best way to convince someone to adopt a new behavior is to convince him that other people have already adopted that behavior. So if you want individual team members to start showing up on time for meetings, for example, you can start by pointing out that the majority of team members do show up on time. As Leon Neyfakh writes in an article about how to change the way people do things: "a culture of respect and kindness isn't necessarily made up of angels—just people who have come to believe that that's what everyone else thinks is the right way to act" (Neyfakh 2012).
When they go unrecognized, cross-cultural misunderstandings like the ones Meyer describes can cause a host of ill feelings. The first step toward preventing these misunderstandings is self-knowledge.
What are your cultural biases, and how do they affect what you expect of other people? To find out, take this helpful quiz based on Erin Meyer's research on cross-cultural literacy: "What's Your Cultural Profile?"
When thinking about culture, keep in mind that different generations have different cultures, too. Behavior that might feel perfectly acceptable to a twenty-four-year-old (texting during a meeting, wearing casual clothes to work) is often frowned on by older workers. Like cross-cultural differences, generational traits can cause unexpected conflicts on a team. This can be exacerbated if older team members feel threatened by younger workers, perhaps because younger workers are better at mastering new technology. Meanwhile, because of their lack of experience, younger workers might lack the ability to synthesize new information about a project. Your attempts to manage a multi-generational team can really go off the rails if you make the mistake of confusing "character issues like immaturity, laziness, or intractability with generational traits" (Wall Street Journal n.d.). This helpful guide suggests some ways to bridge the generation gap: "How to Manage Different Generations."
Teams also have their own cultures, and sometimes you'll have to navigate widely diverging cultures on multiple teams. Take the time to get to know your team's set of norms and expectations, especially if you're joining a well-established group. After a little bit of observation, you might conclude that your team's culture is preventing it from achieving its goals, in which case, if you happen to be the team leader, you'll need to lead the team in a new direction.
Personality Power
Even among people from similar backgrounds, differences in personality can invigorate a team, injecting fresh perspectives and new ideas. A team of diverse personality types can be a challenge to manage, but such a team generates richer input on the project's progress, increasing the odds of project success. For more on teams with diverse personalities, see this article from the American Society of Mechanical Engineers: "More Diverse Personalities Mean More Successful Teams." For tips on managing a truly toxic individual, see this Harvard Business Review article: "How To Manage a Toxic Employee."
5.7 Virtual Teams: A Special Challenge
Managing a team of people who work side-by-side in the same office is difficult enough. But what about managing a virtual team—that is, a team whose members are dispersed at multiple geographical locations? In the worldwide marketplace, such teams are essential. Deborah L. Duarte and Nancy Tennant Snyder explain the trend in their helpful workbook, Mastering Virtual Teams:
Understanding how to work in or lead a virtual team is now a fundamental requirement for people in many organizations…. The fact is that leading a virtual team is not like leading a traditional team. People who lead and work on virtual teams need to have special skills, including an understanding of human dynamics and performance without the benefit of normal social cues, knowledge of how to manage across functional areas and national cultures, skill in managing their careers and others without the benefit of face-to-face interactions, and the ability to use leverage and electronic communication technology as their primary means of communicating and collaborating.
(Duarte and Tennant Snyder 2006, 4) Names Matter People like to hear their names used in conversation because it suggests that you are trying to get to know them and to address their concerns. But names can be tricky when you are working with people from cultures other than your own. Use this site to learn how to pronounce names from languages you don’t speak: PronounceNames.com. For some tips on using and remembering names, see this article: “The Power of Using a Name.” When properly managed, collaboration over large distances can generate serious advantages. For one thing, the diversity of team members “exposes members to heterogeneous sources of work experience, feedback, and networking opportunities.” At the same time, the team’s diversity enhances the “overall problem-solving capacity of the group by bringing more vantage points to bear on a particular project” (Siebdrat, Hoegel and Ernst 2009, 65). Often, engaging with stakeholders via email allows for more intimacy and understanding than face-to-face conversations, which, depending on the personality types involved, can sometimes be awkward or ineffective. However, research consistently underscores the difficulties in getting a dispersed team to work effectively. In a widely cited study of 70 virtual teams, Vijay Govindarajan and Anil K. Gupta found that “only 18% considered their performance ‘highly successful’ and the remaining 82% fell short of their intended goals. In fact, fully one-third of the teams … rated their performance as largely unsuccessful” (2001). Furthermore, research has consistently shown that virtual team members are “overwhelmingly unsatisfied” with the technology available for virtual communication and do not view it “as an adequate substitute for face-to-face communication” (Purvanova 2014). Given these challenges, what’s a virtual team manager to do? It helps to be realistic about the barriers to collaboration that arise when your team is scattered around the office park or around the globe. The Perils of Virtual Distance Physical distance—the actual space between team members—can impose all sorts of difficulties. According to Frank Siebdrat, Martin Hoegl, and Holger Ernst, most studies have shown that teams who are located in the same space, where members can build personal, collaborative relationships with one another, are usually more effective than teams that are dispersed across multiple geographical locations. Potential issues include difficulties in communication and coordination, reduced trust, and an increased inability to establish a common ground…. Distance also brings with it other issues, such as team members having to negotiate multiple time zones and requiring them to reorganize their work days to accommodate others’ schedules. In such situations, frustration and confusion can ensue, especially if coworkers are regularly unavailable for discussion or clarification of task-related issues. (Siebdrat, Hoegel and Ernst 2009, 64) Even dispersing teams on multiple floors of the same building can decrease the team’s overall effectiveness, in part because team members “underestimate the barriers to collaboration deriving from, for instance, having to climb a flight of stairs to meet a teammate face-to-face.” Team members end up behaving as if they were scattered across the globe. 
As one team leader at a software company noted, teams spread out within the same building tend to “use electronic communication technologies such as e-mail, telephone, and voicemail just as much as globally dispersed teams do” (Siebdrat, Hoegel and Ernst 2009, 64). Communication options like video conferences, text messages, and email can do wonders to bridge the gap. But you do need to make sure your communication technology is working seamlessly. Studies show that operational glitches (such as failed Skype connections or thoughtlessly worded emails) can contribute to a pernicious sense of distance between team members. Karen Sobel-Lojeski and Richard Reilly coined the term virtual distance to refer to the “psychological distance created between people by an over-reliance on electronic communications” (2008, xxii). Generally speaking, it is tough to build a team solely through electronic communication. That’s why it’s helpful to meet face-to-face occasionally. A visit from a project manager once a year or once a quarter can do wonders to nurture relationships among all team members and keep everyone engaged and focused on project success. In their book Uniting the Virtual Workforce, Sobel-Lojeski and Reilly document some “staggering effects” of virtual distance: • 50% decline in project success (on-time, on-budget delivery) • 90% drop in innovation effectiveness • 80% plummet in work satisfaction • 83% fall off in trust • 65% decrease in role and goal clarity • 50% decline in leader effectiveness (2008, xxii) The Special Role of Trust on a Virtual Team So what’s the secret to making virtual teams work for you? We’ve already discussed the importance of building trust on any team. But on virtual teams, building trust is a special concern. Erin Meyer describes the situation like this: “Trust takes on a whole new meaning in virtual teams. When you meet your workmates by the water cooler or photocopier every day, you know instinctively who you can and cannot trust. In a geographically distributed team, trust is measured almost exclusively in terms of reliability” (Meyer 2010). All sorts of problems can erode a sense of reliability on a virtual team, but most of them come down to a failure to communicate. Sometimes the problem is an actual, technical inability to communicate (for example, because of unreliable cell phone service at a remote factory); sometimes the problem is related to scheduling (for example, a manager in Japan being forced to hold phone meetings at midnight with colleagues in North America); and sometimes the problem is simply a failure to understand a message once it is received. Whatever the cause, communication failures have a way of eroding trust among team members as they begin to see each other as unreliable. And as illustrated in Figure 5-1, communicating clearly will lead your team members to perceive you as a reliable person, which will then encourage them to trust you. Figure 5-1: The benefits of clear communication You can learn more about Leigh Thompson’s ideas in this entertaining four-minute video: “Optimizing Virtual Teams.” Leigh Thompson, a professor at Northwestern University’s Kellogg School of Management, offers a number of practical suggestions for improving virtual team work, including the following: • Verify that your communication technology works reliably, and that team members know how to use it. • Take a few minutes before each virtual meeting to share some personal news, so that team members can get to know each other. 
• Use video conferencing whenever possible, so everyone can see each other. The video image can go a long way toward humanizing your counterparts in distant locales. If video conferencing is not an option, try at least to keep a picture of the person you're talking to visible, perhaps on your computer. Studies have shown that even a thumbnail image can vastly improve your ability to reach an agreement with a remote team member. (2015)
Many other resources on virtual teams emphasize the same themes: good communication and building trust among team members.
5.8 Core Considerations of Leadership
Good teamwork depends, ultimately, on a leader with a clear understanding of what it means to lead. To judge by the countless books on the topic, you'd think the essential nature of leadership was widely understood. However, few people really understand the meaning of "leadership." In his book, Leadership Theory: Cultivating Critical Perspectives, John P. Dugan examines "core considerations of leadership," zeroing in on misunderstood terms and false dichotomies that are nevertheless widely accepted as accurate explanations of the nature of leadership. Dugan argues that a confused understanding of these essential ideas makes becoming a leader seem like a far-off dream that only a select few can attain (Dugan 2017). But in fact, he argues, anyone can learn how to be a better leader. Here's what Dugan has to say about core considerations of leadership:
• Born Versus Made: This is one of the most pernicious false dichotomies regarding leadership. Dugan explains, "That there is even a need to address a consideration about whether leaders are born or made in this day and age is mind-numbingly frustrating. Ample empirical research illustrates that leadership is unequivocally learnable when defined according to most contemporary theoretical parameters."
• Leader Versus Leadership: People tend to conflate the terms leader and leadership, but, according to Dugan, "Leader refers to an individual and is often, but not always, tied to the enactment of a particular role. This role typically flows from some form of formal or informal authority (e.g., a supervisor, teacher, coach). When not tied to a particular role, the term leader reflects individual actions within a larger group, the process of individual leader development, or individual enactments attempting to leverage movement on an issue or goal. Leadership, on the other hand, reflects a focus on collective processes of people working together toward common goals or collective leadership development efforts."
• Leader Versus Follower: The conflation of leader and leadership makes it easier to create an additional false dichotomy around the terms "leader and follower," with follower considered a lesser role. "The label of leader/follower, then, is tied solely to positional authority rather than the contributions of individuals within the organization. If we flip the example to one from social movements, I often see an interesting shift in labeling. In the Civil Rights Movement in the United States there are multiple identified leaders (e.g., Martin Luther King, Jr., Malcolm X, Rosa Parks, James Baldwin) along with many followers. However, the followers are often concurrently characterized as being leaders in their own right in the process.
In social movements it seems we are more willing to simultaneously extend labels of leader and follower to a person.” • Leadership Versus Management: “Also tied up in leader/leadership and leader/follower dichotomies are arguments about whether leadership and management represent the same or unique phenomena. Once again, the role of authority gets tied up in the understanding of this. Many scholars define management as bound to authority and focused on efficiency, maintenance of the status quo, and tactics for goal accomplishment. An exceptional manager keeps systems functioning through the social coordination of people and tasks. Leadership, on the other hand, is less concerned with the status quo and more attentive to issues of growth, change, and adaptation.” Emergent Leadership Traditionally, engineers tended to be rewarded primarily for their analytical skills and their ability to work single-mindedly to complete a task according to a fixed plan. But in the modern world, plans are rarely fixed, and a single-minded focus blinds you to the ever-changing currents of living order. This is especially true when multiple people come together as a team to work on a project. The old, geometric order presumes the continuation of the status quo, with humans working in a strict hierarchy, directed from above, performing their prescribed tasks like ants storing food for winter. By contrast, living order unfolds amidst change, risk-taking, collaboration, and innovation. This is like an ant colony after a gardener turns on a hose, washing away carefully constructed pathways and cached supplies with a cold gush of water, transforming order into chaos, after which the ants immediately adapt, and get to work rebuilding their colony. In such an unpredictable environment, the truly effective project manager is one who can adapt, learn, and perceive a kind of order—living order—in the chaos. At the same time, the truly effective project leader knows how to create and lead a team that is adaptable and eager to learn. ~Practical Tips • At the end of every day, summarize what you and your team accomplished: In The Progress Principle, Teresa Amabile and Steven Kramer include a detailed daily checklist to help managers identify events throughout the day that promoted progress on the team’s goals, or that contributed to setbacks (2011, 170-171). “Ironically,” they explain, “such a microscope focus on what’s happening every day is the best way to build a widespread, enduring climate of free-flowing communication, smooth coordination, and true consideration for people and their ideas. It’s the accumulation of similar events, day by day, that creates that climate” (2011, 173). Or if you prefer a less regimented approach, consider writing periodic snippets, five-minute summaries of what you and your team accomplished, and then emailing them to stakeholders. Snippets became famous as a productivity tool at Google. You can learn all about snippets here: http://blog.idonethis.com/google-snippets-internal-tool/. • Establish a clear vision of what constitutes project success, and then work hard in the early stages to overcome any hurdles: This is job one for any project team leader. Focus all your teamwork skills on this essential goal. 
• Build trust by establishing clear rules for communication: This is important for all teams, but especially for virtual teams spanning multiple cultures:
Virtual teams need to concentrate on creating a highly defined process where team members deliver specific results in a repeated sequence. Reliability, aka trust, is thus firmly established after two or three cycles. Because of that, face-to-face meetings can be limited to once a year or so. (Meyer 2010)
• Take time to reassess: In an article summarizing work on teams completed by faculty at the Wharton School of the University of Pennsylvania, Jennifer S. Mueller, a Wharton professor of management, explains how to get a team back on track:
While teams are hard to create, they are also hard to fix when they don't function properly. So how does one mend a broken team? "You go back to your basics," says Mueller. "Does the team have a clear goal? Are the right members assigned to the right task? Is the team task focused? We had a class on the 'no-no's' of team building, and having vague, not clearly defined goals is a very, very clear no-no. Another no-no would be a leader who has difficulty taking the reins and structuring the process. Leadership in a group is very important. And third? The team goals cannot be arbitrary. The task has to be meaningful in order for people to feel good about doing it, to commit to the task." (Wharton School 2006)
• Keep your team small if possible: Social psychologists have been studying the question of the ideal team size for decades. The latest research suggests that smaller is better. So for large projects, it's sometimes helpful to divide a team into layers of sub-teams of about ten members. As Jennifer S. Mueller explains, when deciding on team size, you have to consider the type of project:
Is there an optimal team size? Mueller has concluded … that it depends on the task. "If you have a group of janitors cleaning a stadium, there is no limit to that team; 30 will clean faster than five. But," says Mueller, "if companies are dealing with coordination tasks and motivational issues, and you ask, 'What is your team size and what is optimal?' that correlates to a team of six" (2006).
• Pick the right people: In his book Mastering the Leadership Role in Project Management, Alexander Laufer describes project managers who succeeded in part because they "selected people not only on the basis of their technical, functional, or problem-solving skills, but also on the basis of their interpersonal skills" (2012, 223-224). He emphasizes the importance of selecting the best possible members for your team:
With the right people, almost anything is possible. With the wrong team, failure awaits. Thus, recruiting should be taken seriously, and considerable time should be spent finding and attracting, and at times fighting for, the right people. Even greater attention may have to be paid to the selection of the right project manager. (2012, 222)
• Use a buddy system: One way to deal with large virtual teams is to pair individuals in a specific area (design, purchasing, marketing) with a buddy in another group, company, or team. This will encourage direct contact between peers, making it more likely that they will pick up the phone to resolve issues one-on-one outside the normal team meetings or formal communications. Often these two-person teams within a team will go on to build personal relationships, especially if they get to meet face-to-face on occasion, and even better, socialize.
• Use Skype or other video conference options when possible: Video conferences can do wonders to improve team dynamics and collaboration. After all, only a small percentage of communication is shared via words. The remainder is body language and other visual cues.
• Bring in expert help: It's common for a team to realize it is underperforming because of interpersonal problems among team members but then fail to do anything about it, perhaps because of a natural aversion to conflict. But this is when the pros in your company's human resources department can help. If your team is struggling, all you need to do is ask. As explained in this article, you may be surprised by all the ways your human resources department can help you and your team: "6 Surprising Ways That Human Resources Can Help Your Career." Other resources for repairing a dysfunctional team include peer mentors and communities of practice.
• Consider the possibility that you are the problem: If most or all of the teams you join turn out to be dysfunctional, then it's time to consider the possibility that you are the problem. Examine your own behavior honestly to see how you can become a better team member. Peer mentors and communities of practice can be an invaluable way to sharpen your teamwork skills. It's also essential to understand the role you typically play in a team. This 28-question quiz is a good way to start evaluating your teamwork skills: "Teamworking Skills."
• Learn how to facilitate group interactions: Just as musicians need to study and practice their instruments, leaders need to study and practice the best ways to facilitate team interactions.
• Do your team-building exercises: People often claim to dislike team-building exercises, but they can be essential when kicking off major projects. This is especially true for teams that do not know each other, but also for teams that have worked together before or that inhabit the same building. Your team-building efforts don't need to be major events. In fact, the less planned they appear to be, the better.
• Take time to socialize: The camaraderie generated by a few hours of socializing helps build the all-important trust needed for a team to collaborate effectively. Try to make your work hours fun, too. Of course, if you are working with teams that span multiple cultures, you need to be sensitive to the fact that what's fun for one person might not be fun for another. But at the very least, most people enjoy a pleasant conversation about something other than work. Encourage team members to tell you stories about their lives. In the process, you'll learn a lot about your team members and how they filter information. Sharing stories also makes work more interesting and helps nurture relationships between team members.
• Have some fun: Something as simple as having your team choose a name for a project and creating a project logo can help create a sense of camaraderie. Consider encouraging friendly competitions between teams, such as 'first to get a prototype built' or 'most hours run on a test cell in the week.' If your office culture is relatively relaxed, you might want to try some of the fun ideas described in this article: "25 Ways to Have Fun at Work."
• Celebrate success: Too often teams are totally focused on the next task or deliverable. Take the time to celebrate a mid-project win. This is especially helpful with lengthy, highly complex projects.
~Summary • A team is a “small number of people with complementary skills who are committed to a common purpose, performance goals, and approach for which they hold themselves mutually accountable” (Katzenbach and Smith 1993, 45). A high-functioning team is more than the sum of its individual members. Its members offer complementary sets of skills and varying perspectives that make it possible to solve problems as they arise. Perhaps most importantly, teams are good at adapting to changing circumstances. • Trust is the magic ingredient that allows team members to work together effectively. Because teams often come together in a hurry, building trust quickly is essential. Several techniques, traits, and behaviors help foster trusting relationships: • Reliable promises—a specialized type of commitment pioneered in Lean—formalize the process of agreeing to a task. A reliable promise is predicated on a team member’s honest assessment that she does indeed have the authority, competence, and capacity to make a promise, and a willingness to correct the situation if she fails to follow through. • Emotional intelligence, or the ability to recognize your own feelings and the feelings of others, is crucial to a team’s effectiveness. Some people are born with high emotional intelligence. Others can cultivate it by developing skills associated with emotional intelligence such as self-awareness, self-control, self-motivation, and relationship skills. • An unrealistically positive attitude can destroy painstakingly built bridges of trust between team members. Especially in the planning phase, an overly optimistic project manager can make it difficult for team members to voice their realistic concerns. • Reliable promises, emotional intelligence, and a realistic outlook are only helpful if you have the skills to communicate with your team members. This is one area where getting feedback from your co-workers or taking classes can be especially helpful. • According to John Nelson, team motivators include a sense of purpose; clear performance metrics; assigning the right tasks to the right people; encouraging individual achievement; sailboat rules communication, in which no one takes offense for clear direction; options for mentorship; and consistency and follow-through. Team demotivators include unrealistic or unarticulated expectations; ineffective or absent accountability; a lack of discipline; and selfish, anti-team behavior. One factor—uncontrolled external influences—can have positive or negative effects on motivation, depending on the nature of the team and its members’ abilities to adapt. • Even high-performing teams can be knocked off their stride by personnel transitions or other changes. The Transition Model, developed by William Bridges, describes the stages of transition people go through as they adapt to change: 1) Ending, Losing, and Letting Go; 2) the Neutral Zone; and 3) New Beginning. Many resources are available to help teams manage transitions. • In Agile, a self-organizing team is a “group of motivated individuals, who work together toward a goal, have the ability and authority to take decisions, and readily adapt to changing demands” (Mittal 2013). • Diverse teams are more effective than homogeneous teams because they are better at processing information and are more resourceful at using new information to generate innovative ideas. Companies with a diverse workforce are far more successful than homogeneous organizations.
• Virtual teams present special challenges due to physical distance, communication difficulties resulting from unreliable or overly complicated technology, and cross-cultural misunderstandings. For this reason, building trust is especially important on virtual teams. ~Glossary • emergent leaders—People who emerge as leaders in response to a particular set of circumstances. • emotional intelligence—The ability to recognize your own feelings and the feelings of others. • physical distance—The actual space between team members. • premortem—A meeting at the beginning of a project in which team members imagine that the project has already failed and then list the plausible reasons for its failure. • reliable promise—A commitment to complete a task by an agreed-upon time. In order to make a reliable promise, you need to have the authority to make the promise and the competence to fulfill the promise. You also need to be honest and sincere in your commitment and be willing to correct the situation if you fail to keep the promise. • self-organizing team—As defined in Agile, a “group of motivated individuals, who work together toward a goal, have the ability and authority to take decisions, and readily adapt to changing demands” (Mittal 2013). • team—A “small number of people with complementary skills who are committed to a common purpose, performance goals, and approach for which they hold themselves mutually accountable” (Katzenbach and Smith 1993, 45). • virtual distance—The “psychological distance created between people by an over-reliance on electronic communications” (Lojeski and Reilly 2008, xxii).
In preparing for battle, I have always found that plans are useless but planning is indispensable. –Dwight D. Eisenhower, President of the United States (1953-1961), Supreme Commander of the Allied Forces in Europe (1943-1945) Objectives After reading this chapter, you will be able to • Describe the living order approach to project planning • Explain and distinguish between push and pull planning • Describe the Agile approach to project planning • Explain the relationship between sustainability and continuous improvement • Discuss issues related to contingency plans The Big Ideas in this Lesson • Uncertainty is a permanent feature of a project leader’s work. • In the living order, planning is a learning, collaborative, and adaptive exercise in which team members stand ready to alter the plan at any time to address changing conditions. • Living order planning is exemplified by pull planning, in which planners start by identifying the desired end-state, and then work backwards to plan activities that will achieve that goal. 6.1 A New Way to Think About Planning Merriam-Webster’s definition of planning is “the act or process of making a plan to achieve or do something.” This suggests that the ultimate goal of planning is the plan itself. It also presumes that once a plan has been formulated, you only need to follow the plan to achieve the desired outcome. That’s fine for ordinary conversation. But when we begin to think about living order project planning, a more expansive understanding of the nature of planning emerges. In living order, planning is a process that prepares the project team to respond to events as they actually unfold. The whole point of planning is to develop strategies to manage • Changes to scope • Schedule • Cost • Quality • Resources • Communication • Risk • Procurement • Stakeholder engagement Planning results in a plan, but the plan is not an end in itself. Rather, a plan is a strategic framework for the scheduling and execution of a project. It’s only useful if it includes the information team members require to begin moving forward. And it only remains useful if team members modify the plan as they learn the following about the project: • Key constraints such as the timeline, cost, and functional requirements. • Information on project system issues, such as workflow and milestones, which provide a broad look at the project as a whole. • Plans for periodic check-ins that allow participants and leadership to reevaluate the project and its original assumptions. Planning is Accepting Uncertainty Die-hard geometric order planners take a deterministic approach, laboring under the false notion that once everyone agrees on a plan, the plan itself determines what comes next. Indeed, it is tempting to think you can nail down every detail at the beginning of a project and then get going without looking back. But effective living order planners understand that, especially early in a project, these details are nearly always provisional and subject to change. Thus, effective living order planners stand ready to alter their plans in response to what they learn in changing conditions. They also understand that the context in which a project unfolds has varying levels of detail and variability, with potentially thousands of decisions made over the project’s life cycle. Figure 6.1 shows the expanding circles of context surrounding an individual task within a project. Each circle adds its own variability and uncertainty to a project.
Figure 6.1: Each circle of context adds its own variability and uncertainty to a project. As Alexander Laufer and Gregory Howell explained in an article for Project Management Journal, a project leader’s work is founded in uncertainty (1993). Uncertainty is not an exceptional state in an otherwise predictable process of work, they argue. Instead, it is a permanent feature of modern work. What’s more, the longer the time between planning and implementation, the higher the uncertainty surrounding individual activities. Naturally, the higher the uncertainty in a project, the more difficult it is to plan, and the less effective the plans will be at articulating actions and outcomes. Finally, they emphasize that no amount of planning can eliminate the variability intrinsic to the work of a complex project. Planning for Complexity Alexander Laufer and Gregory Howell remind us that the variability intrinsic to the work of a complex project cannot be eliminated by planning (1993). But what exactly is a complex project? Are all difficult projects complex? In a blog post for Team Gantt, Tera Simon explains: “A complex project isn’t necessarily a difficult project. Projects can be difficult due to reasons such as cost or performance, but this doesn’t automatically mean the project is complex. Complexity refers to projects that include ambiguity or uncertainty. They are surrounded by unpredictability” (2016). Examples of complex projects range from megaprojects like Boston’s Big Dig, to more focused undertakings like developing software for a medical technology that is not yet functional and that will be used by people in changeable healthcare settings. A truly complex project would have some of the following characteristics: • Uncertainty regarding the project’s ultimate goal. • Ambiguously defined or changing constraints. • New or insufficient technologies. • The need for new, previously untested solutions. • A large, changing cast of stakeholders from many disciplines. • An evolving context involving, for example, unpredictable climate or geological constraints, political transitions, and economic upheavals. If you’re interested in a more theoretical investigation of the idea of complexity, read up on the Cynefin framework, which is a decision-making tool designed to help managers make sense of complexity. Created by David Snowden and Mary E. Boone, the Cynefin framework allows leaders “to see things from new viewpoints, assimilate complex concepts, and address real-world problems and opportunities.” It focuses on identifying the type of situation in which you are operating (simple, complicated, complex, or chaotic) and then making choices appropriate to that context. For technical project managers, the framework’s most useful insight is the distinction between complicated and complex projects. Snowden and Boone discuss this idea in an article for the Harvard Business Review: In a complicated context, at least one right answer exists. In a complex context, however, right answers can’t be ferreted out. It’s like the difference between, say, a Ferrari and the Brazilian rainforest. Ferraris are complicated machines, but an expert mechanic can take one apart and reassemble it without changing a thing. The car is static, and the whole is the sum of its parts. The rainforest, on the other hand, is in constant flux—a species becomes extinct, weather patterns change, an agricultural project reroutes a water source—and the whole is far more than the sum of its parts.
This is the realm of “unknown unknowns,” and it is the domain to which much of contemporary business has shifted. (2007) Planning is Learning In living order, it’s helpful to think of a project as a knowledge development activity. Project planning, then, is the continuous process of incorporating new knowledge into a project plan. At the beginning of highly complex or unfamiliar projects, you may know little to nothing about how to achieve the desired outcome. By the end, you know vastly more. The more you learn, the greater your ability to fine-tune the plan to achieve the desired project outcome. This means that a plan will acquire detail as you move forward. It’s important to resist the temptation to include details about factors you don’t yet completely understand because those details will almost certainly turn out to be wrong. Take care not to plan at a level of precision that exceeds your understanding of the many factors that could affect the project. When you commit to this adaptive approach to planning, you can treat project planning in a fundamentally different way. Instead of constantly asking, “How can I steer my project back onto the original plan?” you can ask, “How can I use this new knowledge to refine my plan and improve the likelihood of project success?” When you learn something, encounter a setback, or have a success, you can treat that experience as another data point to incorporate into the ongoing planning process. It’s a piece of information you didn’t have before, which you can then use to improve your overall project plan on your journey toward the project outcome. A project is a knowledge development activity. Once you have accepted the inevitable transition from a state of ignorance to a state of discovery, you will begin to question the possibility of certainty in project planning. A good rule of thumb is that if you are certain about what the future holds for your project, you’re probably wrong or actually don’t need a project plan; a simple task list may work. This is especially true in fields where work occurs in varying locations, under changing, often unpredictable circumstances. As Dora Cohenca-Zall, Alexander Laufer, and others demonstrated in their research on construction project planning, “high levels of uncertainty are the rule rather than the exception” (1994). As we discussed in Lesson 1, modern projects unfold in what Peter Vaill calls a state of “permanent whitewater” (Laufer 2012). It’s simply not possible to foresee all the problems that might arise throughout the life of a project. The ultimate goal of project planning is a well-thought-out strategy that has enough flexibility to adapt to developing circumstances. The planners themselves must continually engage in what psychologists call cognitive reframing, which means reconsidering events and facts to see them in a new way. Only then can they adapt to changing circumstances throughout the life of the project. Planning is Adaptation and Collaboration The goal of a project plan is to explain who creates what, how they create it, and for what purpose. In other words, a project plan is a tool for collaboration. The process of planning is itself a collaborative exercise that is often the first test of a team leader’s ability to build trust among members and to take advantage of the multiple perspectives offered by a diverse team.
Success in planning requires all the teamwork skills and techniques at your disposal, which, as discussed in Lesson 5, include reliable promising, emotional intelligence, a realistic outlook, and good communication skills. The more team members trust each other, the more willing they will be to take the kinds of risks required to adapt to changing circumstances. In Becoming a Project Leader, Alexander Laufer, Terry Little, Jeffrey Russell, and Bruce Maas tell stories about project managers who navigated volatile, complicated projects by fostering adaptation and collaboration (2018). These successful managers combined four essential strategies: • Evolving planning: A learning-based approach to project planning that presumes that the project team will expand their knowledge of the project as it unfolds. • Responsive agility: Quick action to solve problems as a project unfolds, combined with clear, up-to-date communication. • Proactive resilience: Challenging “the status quo, proactively and selectively” to prevent potential problems. • Collaborative teamwork: Encouraging flexible, responsive, and interactive teamwork. In the same book, the authors explain the value of a rolling wave approach to planning in volatile situations in which it’s difficult to make firm commitments. Successful project managers develop plans “in waves as the project unfolds and information becomes more reliable.” This approach involves combining “detailed short-term plans” with 90-day, medium-term plans, and more general master plans that cover the project’s duration: This style of planning does not imply that decisions should be arbitrarily “put off until later.” Rather, it is an act of deliberately splitting off those planning aspects that can be acted upon more opportunely in the future. By applying this approach, two extreme situations are avoided. The first is the preparation of overly detailed plans too soon, which may lead to rapid obsolescence because some decisions are based on information provided by intelligent guesses rather than on reliable data. The other extreme situation is delaying the planning until all the information is complete and stable. In both cases, project effectiveness will suffer. One can make timely and firm decisions only by adopting the planning style that provides greater detail at the appropriate stage of the project. (18-19) 6.2 The Geometric Order Approach: Push The traditional, geometric-order approach to planning is founded on the idea that, with enough research and forethought, planners can foresee most eventualities. In other words, geometric order planners assume that it is possible to know everything, or nearly everything, about a project before it begins. Because planners assume they will have little need to adjust as the project unfolds, geometric order planning presumes a linear progression of sequential, predictable activities. Each participant has a specific place in a hierarchical, sequential order. Geometric order planning works well for straightforward, predictable activities that are easy to repeat, such as laying new sewer pipe. It tends to focus on optimizing individual activities, with each activity presumed to occur as scheduled.
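To make the forward-marching logic of this kind of planning concrete, here is a minimal sketch in Python. The activity names, durations, and start date are invented for illustration; they are not part of the book's sewer-pipe example, and a real push schedule would of course carry far more detail.

```python
from datetime import date, timedelta

# A minimal sketch of push scheduling: activities with fixed duration estimates
# are laid end to end, forward from a fixed start date, and every finish date
# is assumed to hold. Names, durations, and the start date are illustrative only.
activities = [("Excavate trench", 5), ("Lay pipe", 10), ("Backfill", 3), ("Pave", 4)]

start = date(2024, 3, 4)
for name, duration_days in activities:
    finish = start + timedelta(days=duration_days)
    print(f"{name:15} start {start}  finish {finish}")
    start = finish  # the next activity is pushed onto the schedule immediately
```

Notice that nothing in the loop asks whether the downstream activity is actually ready for the work being handed to it; each date simply follows from the one before.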
The problem with this way of thinking is that it leads planners to disregard the uncertainty associated with activities that are dependent on each other—for example, suppose you are designing a product for a foreign market and a trade war breaks out, placing a large tariff on your product, making it much more expensive. In that case you would need to rethink your product, its potential markets, and perhaps where it is manufactured. The term push plan is used to describe a project plan founded on an assumption of geometric order principles. Once the plan is complete, it’s assumed the work will unfold accordingly, resulting in a predetermined project outcome. (See Figure 6-2.) Once a push plan is set in motion, stakeholders tend to focus on keeping the parts of the plan moving forward. The term push was first used in manufacturing to describe a system in which “production is based on a projected production plan and where information flows from management to the market, the same direction in which the materials flow” (BusinessDictionary n.d.). Figure 6-2: Once a push plan is set in motion, stakeholders tend to focus on keeping the parts of the plan moving forward. A push system is typically built around forecasts of customer demand, which may or may not be right. However, once a push plan is set in motion, the actual demand for the final product is of less concern during production than the need to keep the parts of the plan moving forward. A push plan can be appropriate where the project type and the activities to be performed are well known and very similar to previous projects. It is also appropriate when producing a commodity for a general audience. In construction and manufacturing, the ultimate goal of push planning is to produce a product for the least possible cost. In a push-plan project, subcontractors focus on completing their work on time and on budget, so they can call their work finished and move on to another project. Such a project is built around forecasts of labor availability (see Figure 6-3), which are actually hard to predict and are often wrong. Meanwhile, managers are typically judged by their ability to meet the predefined production schedule. In this way, individual self-interest drives a push project forward, with limited regard for managing workflow and preventing waste through collaboration and coordination. Figure 6-3: Push planning is built around forecasts of labor availability, which can be wrong. (Adapted from an image by David Thomack.) You’ll sometimes find push planning referred to as make-to-stock—the idea being that a push manufacturing system processes large batches of items at the fastest possible rate, based on forecasts of demand, and then moves them “to the next downstream process or into storage” (Plex 2017). That’s a simplistic way of thinking about push, but it’s a good way to start getting a grasp on the basic idea. To that end, here are some simplified examples of push systems: • Textbooks that are printed and shipped to a warehouse, where they await orders from bookstores that might or might not need them. The total depends in part on sales forecasts of student demand and in part on the minimum number of books the printer is willing to print. • Sidewalk salt that is manufactured and shipped to hardware stores in St. Louis. The same amount is shipped every year, even during an exceptionally warm winter in which the temperature never goes below freezing.
For a humorous yet informative example of extremely bad push planning, see the famous chocolate-wrapping scene from I Love Lucy, in which poor Lucy and Ethel attempt to keep up with an increasingly fast conveyor belt: A YouTube element has been excluded from this version of the text. You can view it online here: pb.libretexts.org/web1/?p=76 More complicated examples of push planning can be found in every industry, including manufacturing, product development, healthcare, and construction. You can identify a push system by looking at the various processes in a particular system and identifying what triggers a particular process to begin work. In a push system, an upstream process is responsible for pushing work onto the next downstream process. For example, in a hospital, the emergency department might push a newly arrived patient downstream, to the surgery floor to await a procedure (see Figure 6-4). If the surgery floor does not have any beds available, the patient will have to wait in the emergency department until one opens up. In this push setting, where “the transition of work is the responsibility of the upstream (i.e., prior) process,” it’s up to the emergency personnel to manage the situation, finally ensuring that the patient does indeed get a spot downstream on the surgery floor (Institute for Healthcare Improvement n.d.). Figure 6-4: Flow of patients in a push hospital setting. A push system lacks any explicit limit on the amount of work that can be in process in the system at any one time (Hopp and Spearman 2004, 142). Once work begins, it is supposed to continue, with no regard for delays due to errors, resource availability, and other problems. Thus, when problems do arise, the system slows to a crawl or stops entirely because it lacks the built-in mechanisms for evaluating and improving flow found in Lean and other living order methodologies. In software development, the classic example of push planning is the waterfall model, illustrated in Figure 6-5. In its purest form, the waterfall model conceives of software development as a set of discrete, sequential steps. It presumes a highly predictable project outcome, with little or no opportunity for adjustments as the project unfolds. Indeed, once the project reaches the testing stage, costs and other considerations make it nearly impossible to go back and alter the original plan. Figure 6-5: Waterfall model of software development. The many variations of push planning are fundamentally appealing because they presume that the world operates according to a prescribed set of rules. After all, as Isaac Newton taught us, we live in a world where the laws of physics produce predictable results. If you drop a football, you know it will hit the ground. If you throw it, you know it will travel through the air. In other words, we are wired to think that careful planning can produce predictable results. But that static, geometric order way of thinking does not adequately reflect the reality of modern project planning. We must take living order into account. Waterfall Model: Some History The Waterfall model was first introduced by Winston W. Royce in a 1970 paper entitled “Managing the Development of Large Software Systems.” Royce described an ideal development process, in which developers engaged in detailed planning at the beginning of the project and then wrote the code to match minute specifications, producing a predictable outcome. But Royce was not recommending this as a realistic way to develop software.
In fact, the sentence immediately following his waterfall diagram reads, “I believe in this concept, but the implementation described above is risky and invites failure” (Royce). Much to his dismay, his description of the overly simplistic waterfall method tore through the software development world of the 1970s and 1980s like wildfire. For an engaging telling of the history of the waterfall method, see this video by Glen Vanderburg, starting at 9:00: A YouTube element has been excluded from this version of the text. You can view it online here: pb.libretexts.org/web1/?p=76 6.3 The Living Order Approach: Pull Planning Pull planning is the practical application of the living order approach to project planning. It is collaborative, flexible, and recursive, and is especially suited for highly complex projects in which stakeholders have to adapt to new information. It presumes a group of workers who coordinate regularly, updating their plan to reflect current conditions. It focuses on producing as much as can actually be completed and no more. The ultimate goal of pull planning is creating the best possible value. The thinking behind pull originated in 1948 with Taiichi Ohno, known as the “father of the Toyota Production System,” which in turn led to Lean and Agile. Living in post-World-War-II Japan, where food shortages were a frequent problem, Ohno drew his inspiration from American supermarkets, with their seemingly magical ability to provide whatever customers wanted, when they wanted it: From the supermarket we got the idea of viewing the earlier process in a production line as a kind of store. The later process (customer) goes to the earlier process (supermarket) to acquire the needed parts (commodities) at the time and in the quantity needed. The earlier process immediately produces the quantity just taken (re-stocking the shelves). (Ohno 1988, 26) Pull Thinking Comes Before Pull Scheduling In Lesson 7, you’ll learn about creating pull schedules, which are typically created by a group of stakeholders who collaborate by placing multi-colored Post-it notes on a wall-sized planning board. It’s good to know how to create pull schedules, but before you can do that, you need to grasp the fundamentals of pull thinking discussed in this lesson. For most people, pull thinking is a whole new way of looking at project planning. Pull scheduling is the practical implementation of pull thinking. Once you grasp the essential concepts of pull thinking, the process of creating a pull schedule is something you can learn by doing. A pull system is sometimes referred to as make-to-order—suggesting that a customer places an order, at which point the entire production facility kicks into gear to create the item required to fulfill that one order. That’s a highly simplified version of pull, but it’s a helpful starting point. These two supply-chain examples illustrate this elementary version of pull: • A student orders a textbook online, a single copy of which is then printed and shipped directly to the student. • A paint store customer puts the last three containers of primer in her cart, prompting the store clerk to restock the primer bin. In reality, the concept of the “customer” means more in pull than just the end user. In a pull system, a downstream process is the customer of the prior, upstream process. Here’s what this would mean on a construction site: All work is done at the pull of the customer.
So the electrician is completing her in-wall rough-in so that the drywaller, her customer, can start standing rock. The drywaller is hanging rock so that his customer, the painter, can begin work. By working backward from a project milestone … we make sure that all work is pulled into the plan. The result is that work happens at the right time, not just whenever it can. (Lemke 2016) Pull planning is a key concept in Lean, which values the seamless flow of work without the inevitable stops and starts (i.e., waste) that characterize push planning. As illustrated in Figure 6-6, pull planning eliminates unnecessary steps, saving as much as one-third of the time required to complete a similar project designed according to a traditional push plan. Figure 6-6: Pull planning eliminates waste by eliminating unnecessary steps. (Source: John Nelson) In Lean thinking, planning is a fluid, real-time process. To be an effective Lean project planner, you need to understand that living order continues to evolve. Planning is no longer about communicating and reinforcing a formal, predefined static plan to all team members. It’s about giving each team member a way of thinking about the project and a process for incorporating new knowledge into the plan, with regular options for resetting the plan as the project unfolds. 6.4 Distinguishing Between Push and Pull In pull planning, you start by identifying the desired end state of the project, which is the value you want to create. Then you work backwards to determine the most efficient (least wasteful) way to get there. This is similar to a crew of kayakers trekking to the bottom of a stretch of rapids and looking back up to formulate a plan for traversing them. After that, they can return to the beginning of the whitewater and attempt to navigate the rapids with a better sense of the challenges that lie ahead. The best way to grasp the nature of a pull system is to compare it to push. As a simple example, suppose you are planning a European vacation. If you were taking a push planning approach, you would start making a list of all the recommended sites in the countries you will be visiting. The result would be a schedule that allocates all the available time to the various destinations. Such a vacation plan is essentially a checklist of things to do or see. By contrast, a pull approach would be entirely focused on how you want to feel on the way home—that is, the value you want the trip to create. You might review the same list of possible sites and activities, but your decisions about which to include in your plan would depend on how you want to feel when the trip is over. Figure 6-7 illustrates the beginnings of a pull plan for two possible types of vacations—one that leaves you feeling rested, and one that leaves you feeling invigorated by new experiences. As with all pull plans, the secret is to start at the end. How do you want to feel on the way home? Then, as shown in Figure 6-8, you can add activities to your plan that ensure you end up feeling that way. As you can see, Post-it Notes are used in both figures. While there are many scheduling and planning programs to choose from, Post-it Notes, a very low-tech option, are used widely in pull scheduling because they are easy to move around, thus encouraging the planner to try out new ideas. Figure 6-7: In pull planning, you start with the desired end-state. Figure 6-8: After you identify the desired end-state, you can plan the activities that will result in that end-state.
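To show how working backward from the desired end-state plays out, here is a minimal sketch in Python. The milestone date, activity names, and durations are invented for illustration; they loosely echo the construction example above rather than reproduce it.

```python
from datetime import date, timedelta

# A minimal sketch of pull thinking: start from the desired end-state (a
# milestone) and work backwards, so each activity is scheduled only as late as
# its downstream "customer" needs it. All names, dates, and durations are
# illustrative assumptions.
milestone_name, milestone_date = "Rooms ready for final inspection", date(2024, 6, 14)

# Listed downstream-first: each activity is the customer of the one after it.
activities = [("Paint walls", 2), ("Hang drywall", 3), ("Rough-in wiring", 4)]

needed_by = milestone_date
print(f"{milestone_name}: {milestone_date}")
for name, duration_days in activities:
    start = needed_by - timedelta(days=duration_days)
    print(f"{name:16} start {start}  finish by {needed_by}")
    needed_by = start  # the upstream activity must be done by this date
```

Working backwards this way makes the dependency explicit: each upstream activity has a finish date only because its customer, the next activity downstream, needs it by then.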
Pull Planning in Action For a more extensive introduction to pull planning, see this one-hour video, produced by the authors of this book, which uses planning a vacation as a way to explain essential pull-planning concepts. A YouTube element has been excluded from this version of the text. You can view it online here: pb.libretexts.org/web1/?p=76 As you learn to identify push and pull at work in various environments, you’ll begin to see how organizations use one or the other, or combine both approaches to achieve their goals. The fact is, “in the real world, there are no pure push or pure pull systems” (Hopp and Spearman 2004, 143). You’ll also discover that the concepts of push and pull are used to emphasize different concerns in different contexts. For example, consider how the terms are used in the world of supply chain management, which encompasses “all the activities that must take place to get the right product into the right consumer’s hands in the right quantity and at the right time – from raw materials extraction to consumer purchase” (Mays Business School n.d.). In supply chain management, push/pull experts often speak of the boundary between a push portion of a system and the pull portion (Sehgal 2009). As in the following example, the boundary between push and pull usually arises after a product has been manufactured in a push environment, based on general forecasts of consumer demand, and warehoused until specific requests from specific customers pull the product into the marketplace: A food manufacturer may make mushroom soup that they brand in a few ways—their own label plus those of several supermarket chains. The manufacturer has a good idea of the amount of soup that they need each month. They are less sure, however, of how to package it to meet demand. As a result, they will likely choose to mass-produce the mushroom soup and inventory it in unlabeled cans until orders materialize. Then they can quickly label the cans and ship them out when customers place their orders. (McGinley 2016) But don’t let these more specialized applications of the concepts of push and pull distract you from the fundamental meaning of pull. Whatever your area of expertise, you’ll never go wrong by applying some pull thinking to a new project. What’s the desired end-state of the project? What value do you want the project to create? Most importantly, how will you collaborate with the project stakeholders to achieve it? Pull Public Speaking Matthew David Potter, a Masters in Engineering Management student at the University of Wisconsin-Madison, noted that, when working on a class presentation, he tried to focus on what would be useful for his fellow students, rather than what he wanted to include. In other words, like all good communicators, he zeroed in on the needs of his audience. Later, he realized that focusing on the audience is actually a form of pull thinking—starting at the desired end-state, and then figuring out how to achieve it. He summarizes his experience as follows: • If you start at the end (customer), identify the three key points (value) and then pull that value back through as you create the slides (value stream and flow), the end result is way tighter. Taking that approach encouraged me to cut information that might be interesting to me, but that wasn’t critical to make my main points (waste).
• By contrast, the geometric push approach starts with identifying all the information you want to share, followed by creating slides for each important point, and then a scramble to tie it all together and somehow edit it to keep the presentation under the prescribed time limit (pers. comm., June 15, 2018). 6.5 Pull and Agile Agile to the Rescue The many problems associated with the roll-out of the HealthCare.gov website in 2013 can be traced to an attempt to use a waterfall development model for a vast and complicated project. The project was rescued by a team of Agile developers, many of whom essentially volunteered their time to get the system up and running. You can read all about it here: “Obama’s Trauma Team: How an Unlikely Group of High-Tech Wizards Revived Obama’s Troubled HealthCare.gov Website.” Agile software development emphasizes an iterative approach to planning. Whereas the traditional, waterfall approach to software development presumes few changes to the project after the software requirements have been formulated, Agile is specifically designed to allow project participants to adapt to changing circumstances, the most important of which is often the customer’s evolving notion of the software requirements. Rather than planning the entire project at the beginning, Agile project planners focus on creating accelerated, evolving iterations of the product in one- to two-week development sprints. Unlike traditional project management, which presumes a well-defined beginning to a project, with tasks unfolding until the well-defined end, Agile project management has an iterative, circular flow. The engine that propels this flow is continuous feedback from the product owners about how well the software meets their needs. As the software begins to take shape, product owners are continually asked to make decisions about which features they value most, and which might be dispensed with in order to meet the project budget and schedule. This means that, in Agile development, the planning doesn’t end until the project is over. Predictability emerges if you give it time to emerge naturally and eludes you if you try to force it too soon. One of the gifts of Agile is that it is self-calibrating. Once you’ve run a few sprints, an actual rate of progress starts to emerge, calibrated to the particular team, sponsor, technology, and requirements. (Merrill 2017) As software grows increasingly important in many types of products, it’s likely that Agile, with its cycles of fast feedback and revisions, will become more common in product development, including among large manufacturers such as John Deere. The cycles of Agile development produce working software faster, making it easier to get feedback from marketing and customers earlier in the product development life cycle. Agile engineering, as this new form of product development is called, encourages teams to learn about their product and make improvements faster than they could with traditional product development. In a blog post for FormLabs, a manufacturer of high-end 3D printers, Joris Peels writes, Learning from failure through prototypes helps companies quickly build better products. By validating assumptions and collecting data, these products are made in a more accurate, evidential way. With traditional methods, teams painstakingly make world maps and then spend months planning a possible route through this imagined world. Only then do they have a product and really know where they stand.
With Agile Engineering, products emerge in the first week of product development. Teams set off and check their compass often. (2016) 6.6 Sustainability: Planning for Continuous Improvement Continuous improvement is “a method for identifying opportunities for streamlining work and reducing waste” (LeanKit n.d.). Known in Lean as kaizen (Japanese for improvement), continuous improvement is a key concept in Lean and Agile, but is a motivating force in all types of organizations. To fully incorporate continuous improvement into your organization’s philosophy, you need to build it into projects, starting with the planning phase. Indeed, the very process of planning is itself a continuous improvement activity because it involves looking to the future and thinking about how to do things better. Continuous improvement is an important concept for organizations seeking to make systematic sustainability changes, and for individual projects concerned with sustainability. This is especially true for projects unfolding over a long period of time because new sustainability technologies might become available during the course of the project. According to Bill Wallace, author of Becoming Part of the Solution: The Engineer’s Guide to Sustainable Development, continuous improvement programs devoted to amplifying a company’s sustainability efforts should include the following: Baseline assessment. The firm should determine its current environmental and societal impact. This should be done by first defining the scope of the firm’s activities and assessing the impacts of those activities against existing performance standards, norms, or other benchmarked firms…. Set objectives for improvement. Based on the results of the baseline assessment, the firm should devise a comprehensive set of objectives for improvement. The objectives should be measurable against established indicators. Schedules…should also be established…. Implementation. Once objectives and schedules are set, the firm should devise programs for implementation and allocate sufficient funding to achieve the objectives. Regular sustainable performance reports should be generated and sent to top management…. The reports should also contain an assessment of technology developments that could change current practices. Review and revision. The firm should schedule periodic reviews…to check progress against the objectives and plans and to see how program funds were spent. Based on program performance, client expectations, new benchmarks, new technologies, firmwide performance, or other variables, the objectives should be revisited and revised accordingly. (2005, 95) 6.7 A Word on Contingency Plans Beware the Planning Fallacy According to this interesting and entertaining podcast, humans are genetically wired for optimism. This makes us painfully susceptible to the planning fallacy, a cognitive bias that makes us think we can finish projects faster, and for less money, than is actually realistic: “Here’s Why All Your Projects Are Always Late — And What to Do About It” (Freakonomics Podcast). In addition to creating the project plan, you need to create a contingency plan, which is a plan for addressing key possible obstacles to project success. A contingency plan defines alternate paths for the project in case various risks are realized. A contingency plan typically includes a contingency fund, which is an amount of resources set aside to cover unanticipated costs.
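To make the arithmetic of a contingency fund concrete, here is a minimal sketch in Python. It assumes the fixed-percentage (ten percent) rule of thumb discussed below; the budget figure and the individual cost items are invented for illustration only.

```python
# A minimal sketch of a fixed-percentage contingency fund, assuming the ten
# percent rule of thumb mentioned in the text. The budget and cost items are
# invented for illustration.
base_budget = 400_000          # approved project budget, in dollars
contingency_rate = 0.10        # typical rule of thumb; varies by project and industry
contingency_fund = base_budget * contingency_rate   # $40,000 set aside

# Small, unanticipated items are charged against the fund as they appear.
unanticipated_costs = [3_500, 12_000, 6_250]
remaining = contingency_fund - sum(unanticipated_costs)

print(f"Contingency fund: ${contingency_fund:,.0f}")
print(f"Remaining after known surprises: ${remaining:,.0f}")
# A major deviation or scope change would not be charged here; it calls for re-planning.
```

As the final comment notes, the fund absorbs small surprises; it is not a substitute for revisiting the plan when something major changes.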
Contingency plans and funds are necessary because even the most seasoned project planner sometimes succumbs to excessive optimism, assuming everything will go well and that all resources will be available when needed. Also, no matter how thoroughly you plan a project, you will inevitably miss at least a few small issues. Examples of issues that might necessitate the use of a contingency fund: • Inadequate initial estimates • Small items not covered in planning • Errors in initial estimates • Small deviations due to inevitable delays Note that a contingency fund is not designed to manage major deviations or scope changes. A simple and effective form of contingency planning is setting aside a contingency fund consisting of a fixed percentage of all resources (time, money, people) in addition to the amounts spelled out in the final budget. Ten percent is a typical amount, but that can vary depending on the size and type of project, as well as the type of industry. For example, this set of contingency guidelines, created by Arizona State University for campus construction projects, shows a range of contingency percentages: “Project Contingency Guidelines.” Likewise, the U.S. Department of Energy describes a fixed percentage approach to contingency planning here: “Contingency.” One of the chief difficulties of contingency planning is getting people to agree on exactly what is and is not covered by a contingency fund, and how it applies in specific circumstances. A considerable amount of research has been done on this topic, but there is still no clear consensus. For that reason, before launching a major project, you would be wise to investigate the ins and outs of contingency planning at your organization in particular, and in your industry in general. Contingency planning is closely related to risk management, which is discussed in Lesson 8. When you are working on small projects of limited complexity, you can probably assume that a fixed percentage contingency plan will cover most risks. However, for highly complex, technically challenging projects, it’s important to distinguish between generic budget planning contingencies (using a fixed percentage) and the more sophisticated modeling of risk for uncertainty. ~Practical Tips • Use the right level of detail: A project plan needs to be pitched at the right level, with just enough detail. A very high-level plan with minimal detail won’t be meaningful to all stakeholders. On the other hand, an extremely complex project plan with needless detail and endless lists of tasks can be so difficult to update that people will often simply neglect to do so. At that point, such a plan becomes worse than useless. As a rule of thumb, a project plan needs between 15 and 50 activities. That will help ensure that a plan is detailed enough to act on but manageable enough to keep updated. You can then use sub-plans to break tasks down into more detail. • Be prepared to adapt your plan to reflect changing realities: When planning a project, don’t ever assume you are trying to hit a fixed target. In the vast majority of projects, the target actually moves. You need to be flexible and adapt your plan as necessary. • Plan at the appropriate level of precision: Take care not to plan at a level of precision that exceeds your understanding of the many factors that could affect the project.
Doing so generates waste twice: first in the planning process, and then later in the execution stage when you find that the plan needs to be revised to reflect the reality of the situation. You can be sure your planning involves an unrealistic level of precision if, for instance, it results in an estimate like $380,275,465.47. That level of precision implies a level of accuracy that does not exist in the real world. It’s more helpful to say that such a project is worth somewhere in the $350- to $400-million range. • Use scheduling technology as one of many planning tools: Use technology tools, such as project management software, that all stakeholders understand and know how to access. But don’t make the mistake of thinking that creating a schedule is the same as planning the project. A schedule is just one aspect of a project plan. • Keep your eye on success: Throughout the planning process, maintain a clear focus on the definition of project success. • Get everyone together to plan: If your team is scattered across multiple geographic locations, try to get everyone to meet in one place for at least part of the planning phase. This can go a long way toward clearing up misunderstandings caused by poorly worded emails or conference calls in which some stakeholders might dominate more than others. • Think holistically about your project plan: A good project plan touches on every element of the project. This 3.5-minute video gives a quick overview of things to think about when planning a project: “What Goes Into a Project Plan?” ~Summary • In living order, a plan is not an end in itself, but rather a strategic framework for the scheduling and execution of a project. It’s helpful to think of a project as a knowledge development activity. Project planning, then, is the continuous process of incorporating new knowledge into a project plan. A project plan is a tool for collaboration, and the process of planning is itself a collaborative exercise that is often the first test of a team leader’s ability to build trust among members and to take advantage of the multiple perspectives offered by a diverse team. • Geometric order planning presumes a linear progression of sequential, predictable activities. The term push plan is used to describe a project plan founded on an assumption of geometric order principles. • Pull planning is the practical application of the living order approach to project planning. It is collaborative, flexible, and recursive, and is especially suited for highly complex projects in which stakeholders have to adapt to new information. It presumes a collaborative group of workers who coordinate regularly, updating their plan to reflect current conditions. It focuses on producing as much as can actually be completed and no more. • Agile takes a pull planning approach to software development. It is iterative and presumes constant adaptation in response to the customer’s evolving notion of the software requirements. • Continuous improvement, a key concept in Lean and Agile, is an important idea for organizations seeking to make systematic sustainability changes, and for individual projects concerned with sustainability. • In addition to creating a project plan, you need to create a contingency plan that addresses outcomes not spelled out in the project plan. ~Glossary • Agile engineering—A new form of product development that makes use of the iterative cycles of fast feedback and revisions first implemented in Agile software development.
It encourages teams to learn about their product and make improvements faster than they could with traditional product development. • cognitive reframing—The process of reconsidering events and facts to see them in a new way. • contingency plan—A plan for an alternative route to project success that can be implemented if an obstacle to progress arises. • contingency fund—Resources set aside to cover unanticipated costs. • plan—A strategic framework for the scheduling and execution of a project. In traditional, geometric order project planning, a plan presumes events will unfold in a predictable way, with little need to update the plan. In living order project planning, the plan is always provisional and subject to change. • planning fallacy—A cognitive bias that makes us think we can finish projects faster, and for less money, than is actually realistic. • project planning—In traditional, geometric order project planning, the process of formulating the plan that will guide the rest of the project. In living order project planning, “project planning” also refers to the continuous process of incorporating new knowledge into the initial project plan. • pull planning—Project planning that accounts for the unpredictable, ever-changing nature of the living order. Pull planners start at the desired end state of the project, working backwards to determine the most efficient (least wasteful) way to achieve the desired outcome. To be effective, pull planning requires a collaborative group of workers who coordinate regularly, updating their plan to reflect current conditions. • pull schedule—A schedule typically consisting of color-coded sticky notes that can be removed or repositioned as necessary. This can also be replicated in a number of different software programs. The key is to start with the end goal and then work backwards to determine the tasks required to achieve that goal. • push planning—Project planning that presumes events will unfold in a predictable, geometric order. Push planning is founded on management forecasts of customer demand, with great emphasis placed on the need to keep the parts of the plan moving forward. Managers and subcontractors focus on their individual portions of the project, with limited regard for managing workflow and preventing waste through collaboration and coordination. • supply chain management—All the “activities that must take place to get the right product into the right consumer’s hands in the right quantity and at the right time—from raw materials extraction to consumer purchase” (Mays Business School n.d.). • waterfall model—A push plan model used for software that breaks the development process into a set of discrete, sequential steps. It presumes a predictable project outcome, with little or no opportunity for adjustments as the project unfolds. Additional Resources • This one-hour video, produced by the authors of this book, uses planning a vacation as a way to explain essential pull-planning concepts: https://go.wisc.edu/livingpm. • Lean Construction Institute’s glossary, with definitions of “push” and “pull”: “Push” and “Pull” Definitions. • In this book, Peter Vaill introduces the term “permanent whitewater”: Managing as a Performing Art: New Ideas for a World of Chaotic Change (1989).
• In this video, workshop participants build two houses out of plastic blocks, the first while following a traditional push plan and the second while employing the elements of pull planning: “Pull Planning Workshop: San Diego Mesa College.” You won’t be surprised to learn that the pull-planning houses were completed more quickly, with more cooperation and greater overall satisfaction among the team members.
Essentially, all models are wrong, but some are useful. – George Box, Founder of the Department of Statistics, University of Wisconsin-Madison Objectives After reading this chapter, you will be able to • Discuss issues related to moving from the planning phase of a project to the scheduling phase • Define terms related to scheduling • List the scheduling methods most closely associated with geometric and living orders • Explain concepts related to the critical path method, including potential pitfalls and techniques for schedule compression • Explain concepts and techniques related to pull scheduling • Describe ways to integrate pull thinking with the critical path method • Discuss the importance of project milestones The Big Ideas in this Lesson • A project schedule is a shared time map for successful completion of the project. Depending on what constitutes “success” for the project, the schedule may include several hard deadlines and be highly constrained, or may be completely flexible and unconstrained. • Scheduling is a phase of project management that necessarily blends geometric and living order, often combining the predictability of critical path techniques with the agility offered by pull scheduling. • Because a schedule is a communication and thinking tool, the level of detail with which it is prepared and communicated should be tailored for the needs of the project and team members. • The critical path method—the consummate geometric, push tool—is essential for identifying activities that determine the expected duration of a complex project. However, an excessive focus on critical path analysis during project execution can divert needed attention and energy from pull-focused project delivery. 7.1 Crossing the Bridge from Planning to Scheduling As you learned in the previous lesson, a project plan is a high-level view of the project that roughly maps out how to achieve the project’s ultimate goals, given the available time and resources. In living order, a project plan is provisional and open to revision as you learn more about the project. A schedule is a specific, time-based map designed to help the project team get from the current state to successful project completion. You can think of a project plan as similar to a football coach’s strategy for winning a particular game, which might, for instance, include ideas for containing a highly mobile quarterback, or for double-teaming an exceptionally good wide receiver. By contrast, a schedule is all about tactics; it is similar to the specific plays a team uses to ensure that the right players are in the right places on the field at the right time. A team will know some of these plays backwards and forwards after endless practice; other plays will be the result of adaptation and inspiration as the game unfolds. In the same vein, in living order, a schedule typically requires regular adjustments to account for the changing realities of the project. Above all else, a schedule is a form of communication with everyone involved in the project. The attention of project stakeholders is a scarce resource. Therefore, you should strive to make your schedule worthy of your team’s attention. It’s important to shape a schedule to the team’s needs and strengths, and to your organization’s culture. A good role model for this type of flexibility is the jazz and classical musician Wynton Marsalis. When he is performing Mozart’s Trumpet Concerto in D with a symphony orchestra, he follows the strict rules of the classical music world.
When he plays bebop at Lincoln Center, he switches to the free-form, improvisatory style of the jazz world. In the same way, as a project manager operating within living order, you need to be aware of what is and isn’t appropriate and useful for your particular project and organization throughout the life of your project. In all cases, it’s essential to include the right amount of detail—neither too much nor too little. You should start by identifying key milestone dates as hard deadlines—the most important of which is the final delivery date. Those dates are often set in advance by other stakeholders and cannot be changed. Then build a schedule that provides paths to meeting those deadlines. If, as you build the schedule, you realize that meeting those deadlines is not possible, then you may have to adjust milestones and project completion dates. Starting with the most important milestone, delivery date, and then building the schedule backwards, can help ensure that plans don’t get squeezed at the end. It is not uncommon for people to start out with a generous schedule for the first few activities and gradually get more aggressive with timing as they run up against the delivery date. The immediacy of the first activities means that people are more realistic about timings, whereas future activities get planned with more hope than knowledge. Sometimes it’s helpful to start with the deadlines you want to meet, and then create a schedule that fits those dates. This can be a useful exercise that helps you understand the scheduling challenges you face, including identifying the project milestones. It’s also a good way to figure out what parts of the project you need to reevaluate and adjust. It’s also important to tailor a schedule to match the project’s overall complexity and time requirements (for example, whether deadlines are hard or soft). Projects are not equal in terms of complexity and criticality. This means that the type of schedule that works for one project may not work for another. You can choose from a host of software possibilities for generating schedules, such as MS Project. Whichever one you use, take care not to get so lost in the details of building a schedule, and interacting with the software interface, that you lose sight of the project goals as expressed in the project plan. Always keep in mind the project’s definition of success as expressed by the stakeholders, as well as the overall plan for completing the work. A good project manager is able to cross the bridge from the generalities of a project plan to the specifics of a schedule, without losing sight of the big picture. 7.2 Choosing Your Words Making sure all stakeholders use the same terminology is crucial in all phases of project management, but it’s especially important when you are trying to get a group of diverse people to agree to a schedule. After all, a schedule only works as a form of communication if it is written in a language everyone understands. And since contract terms are often tied to schedule, a lack of common agreement on the meaning of specific terms in a schedule can have far-ranging effects. Terminology is so important that many state governments around the United States publish their own project management glossaries. As you embark on a new project, you’d be wise to find out if the organization you work for, or the vendors you will be working with, have compiled such a glossary. If such organizational resources exist, use them as a starting point for your own project glossary. 
Otherwise, you can always turn to the Project Management Institute’s lexicon (available here: “PMI Lexicon of Project Management Terms”) or glossaries provided online by consulting firms or other project management resources. The following definitions of scheduling-related terms are taken from a variety of sources. You’ll find links to these sources in the bibliography at the end of this lesson.
• milestone: “A significant event in the project; usually completion of a major deliverable” (State of Michigan: Department of Technology, Management & Budget 2013). An important distinction is that a milestone is a zero-duration activity; e.g., “acceptance of software by client” is a milestone, preceded by many contributing activities.
• activity: “An element of work performed during the course of a project. An activity normally has an expected duration, an expected cost, and expected resource requirements” (Project-Management.com 2016). Beware that some organizations subdivide activities into tasks while others use task and activity synonymously.
• duration: “The amount of time to complete a specific task given other commitments, work, vacations, etc. Usually expressed as workdays or workweeks” (State of Michigan: Department of Technology, Management & Budget 2013).
• resource: “Any personnel, material, or equipment required for the performance of an activity” (Project-Management.com 2016).
• cost: “An expenditure, usually of money, for the purchase of goods or services” (Law 2016).
• slack: “Calculated time span during which an event has to occur within the logical and imposed constraints of the network, without affecting the total project duration” (Project-Management.com 2016). Or put more simply, slack, which is also called float, is the amount of time that a task can be delayed without causing a delay to subsequent tasks or the project’s overall completion date.

A Single Source of Information for Your Project Team

One growing area of project management is virtual project environments. These relatively low-cost, stand-alone environments usually include a built-in project planner, as well as issues databases, resource allocation utilities, task managers, dashboards, and so on. These virtual environments are especially useful for dispersed teams and make access to MS Project or similar software unnecessary. Most importantly, a virtual project environment serves as a single source of information for important documents like project plans, thus avoiding problems with out-of-date or incorrect information circulating among team members. To see some examples, do a web search for Smartsheet, Mavenlink, and Genius Project.

7.3 Geometric and Living: Two Ways to Schedule

Scheduling is a phase of project management that necessarily blends geometric and living order. Sometimes you need to hew to a predetermined order of activities on a tightly regulated schedule; sometimes you need to allow for the flexibility required when one activity is dependent on another activity of uncertain duration and complexity. In a true geometric order situation, you’ll likely spend more time planning upfront than updating later. In living order, the opposite is true. Generally speaking, in a geometric order, push environment, 60 percent of effort might go into planning, 10 percent into scheduling, and 30 percent into updating and revising the schedule.
In a living order, pull environment, those percentages shift, with 30 percent of effort devoted to planning, 10 percent to scheduling, and 60 percent to adjusting and refining the schedule. The planning and scheduling technique most closely associated with push planning and the geometric order is the critical path method (CPM), which is a “step-by-step project management technique for process planning that defines critical and non-critical tasks with the goal of preventing time-frame problems and process bottlenecks” (Rouse 2015). You can use CPM to identify impacts of proposed changes to the timing and duration of tasks. The key to CPM is identifying sequences of connected activities, known as paths (Larson and Gray 2011, 662). The critical path is defined as “the series of activities which determines the earliest completion of the project” (Project-Management.com 2016). The scheduling technique that best exemplifies living order principles is a pull schedule created collaboratively by stakeholders, typically by using multi-colored Post-it notes. Details of this type of scheduling have been codified in the Last Planner System, a proprietary production planning system, based on Lean principles, and developed by Glenn Ballard and Greg Howell. Agile also makes use of pull scheduling techniques. Although protecting the critical path is important in these types of living order scheduling, explicitly identifying and monitoring the critical path throughout the entire project may be less of a concern. We’ll explore why that’s true later in this lesson. First, let’s look at the basics of CPM. Then we’ll discuss the details of pull scheduling and consider ways to combine push and pull systems to achieve the ultimate goals of Lean: creating value and eliminating waste. 7.4 Push: The Critical Path Method (CPM) CPM is especially useful for large, complex projects where schedule interrelationships may not be readily apparent. It is “ideally suited to projects consisting of numerous activities that interact in a complex manner” (Rouse 2015). First used in industry in the late 1950’s, CPM has its roots in earlier undertakings, most notably on the Manhattan Project. CPM focuses on identifying the critical path and then closely monitoring the activities on the critical path through the entire project. For example, when developing a new machine, the electronic circuit design may be the critical path defining the time to launch. Designing the mechanical structure might also be important, but it may not dictate the overall time to completion and therefore the critical path. Creating a CPM model of a project involves these six steps: 1. Identify the project milestones or deliverables. 2. Create a list of all the activities in the project, as described in a work breakdown structure. 3. Identify the duration for each activity. 4. Construct a network diagram that identifies the dependencies between activities. 5. Calculate early-start, late-start, early-finish, and late-finish dates for each activity. 6. Identify the sequence of tasks through the project with tasks of zero float (slack). This is the critical path. Using CPM, you can identify: • The minimum total time required to complete the project—that is, the critical path. • Flexibility, or slack, in the schedule along other, non-critical paths. A key value of CPM analysis is the understanding it can reveal to the project team about the chain of activities that are likely to establish the earliest completion of the project. 
This understanding can help the team explore ways to reduce project duration and can help the team focus on efficient execution of time-critical activities. If you are considering pursuing certification as a Project Management Professional (PMP), you’ll definitely want to gain experience in CPM. As a technical project manager, you need to become conversant in CPM, even if you lack the technical expertise to create a full-blown CPM analysis in one of the many software products available for the job.

Avoiding CPM Pitfalls

As you explore the tools available for implementing CPM, keep in mind that CPM is the ultimate geometric order tool for project management. It can lure you into a false sense of security regarding the predictability of a project, causing you to presume, for instance, that it is always possible to identify all project activities and their durations ahead of time, or that the dependencies between them are always clear. But in the constantly changing living order, you need to be prepared for change at all times. In some large projects, there actually may be more than one critical path, or the critical path may shift during project execution. This means you need to keep your eye on near-critical activities and paths, so you can spring into action if they suddenly become more critical than your original analysis had foreseen.

CPM provides a helpful model for testing the feasibility of a project’s overall schedule, and is therefore useful in the initial strategizing phase. However, it can become more of a burden than a help during execution if the project team feels compelled to follow the dictates of the CPM model too rigidly. It has limited value in guiding daily schedule decisions and on-the-job coordination. Project managers who spend too much time looking at their CPM models will miss the realities of day-to-day execution. This can lead to reactive rather than proactive project management—that is, managing by looking out the rearview mirror instead of out of the windshield. A successful project manager uses CPM as a means of keeping the project on track and assigning the most reliable personnel to critical activities, all the while keeping in mind that CPM does not deliver absolute truths. In the words of Dr. George Box, the founder of the Department of Statistics at the University of Wisconsin-Madison, “Essentially, all models are wrong, but some are useful.” This is absolutely true of CPM. You need to evaluate your CPM project models regularly to ensure that they are in alignment with the stakeholders’ definition of project success.

In the fast-changing projects that are becoming increasingly common in living order environments, you might have to start with a schedule for project milestones, with only a hypothesis of the overall critical path. Then, throughout the project, you might need to continually revise your concept of the critical path. For instance, when planning a large conference, your critical path may change if registrations significantly lag (or exceed) expectations, requiring you to adjust marketing efforts, logistics, and even the content of the conference. In software development, the critical path may change due to actions by competitors, changes in technology, risk mitigation efforts, scope changes, and integration issues, just to name a few.
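To make the six CPM steps listed earlier in this section more concrete, here is a minimal sketch of the forward and backward passes in Python. The activity names, durations, and dependencies are invented for illustration; on a real project they would come from the work breakdown structure and the team’s duration estimates, and dedicated scheduling software would normally do this bookkeeping for you.

```python
# A minimal CPM sketch: forward pass (earliest start/finish), backward pass
# (latest start/finish), and float. Each activity maps to
# (duration_in_days, list_of_predecessors); the network itself is made up.
activities = {
    "A": (3, []),          # e.g., requirements
    "B": (5, ["A"]),       # e.g., circuit design
    "C": (2, ["A"]),       # e.g., enclosure design
    "D": (4, ["B", "C"]),  # e.g., integration
    "E": (1, ["D"]),       # e.g., acceptance test
}

# Forward pass: an activity can start once all of its predecessors finish.
# (The dict is listed in dependency order, so a simple loop works here.)
ES, EF = {}, {}
for name, (dur, preds) in activities.items():
    ES[name] = max((EF[p] for p in preds), default=0)
    EF[name] = ES[name] + dur

project_duration = max(EF.values())

# Backward pass: an activity must finish before its earliest successor starts.
LS, LF = {}, {}
for name in reversed(list(activities)):
    dur, _ = activities[name]
    successors = [s for s, (_, preds) in activities.items() if name in preds]
    LF[name] = min((LS[s] for s in successors), default=project_duration)
    LS[name] = LF[name] - dur

# Zero-float activities form the critical path.
for name in activities:
    slack = LS[name] - ES[name]
    marker = "  <-- critical" if slack == 0 else ""
    print(f"{name}: ES={ES[name]} EF={EF[name]} LS={LS[name]} LF={LF[name]} float={slack}{marker}")
print(f"Earliest project completion: {project_duration} days")
```

In this made-up network, activities A, B, D, and E have zero float and therefore define the critical path, while C carries three days of float; that float is exactly the kind of scheduling flexibility discussed above.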
Schedule Compression

CPM can be very helpful when you need to compress a schedule—that is, when you need to take a schedule you have already developed and reduce it without adjusting the project’s scope. You can only compress a schedule by adjusting the schedule of activities on the critical path. Keep in mind that compressing a schedule adds cost and risk—often a lot of both. And compressing a schedule is often only achieved at the expense of the people doing the work—increasing their stress levels and overall frustration with their job. There are two key ways to compress a schedule:
• fast tracking—A schedule compression technique in which “activities that would have been performed sequentially using the original schedule are performed in parallel. In other words, fast tracking a project means the activities are worked on simultaneously instead of waiting for each piece to be completed separately. But fast tracking can only be applied if the activities in question can actually be overlapped” (Monnappa 2017). For example, when building a new house, you might be able to overlap pouring the concrete for an exterior patio and shingling the roof, but you can’t overlap digging the foundation for the house and shingling a roof that has not yet been built.
• crashing—This technique involves adding resources such as overtime or more equipment to speed up the schedule. Because of the costs involved in adding resources, crashing is “the technique to use when fast tracking has not saved enough time on the project schedule. With this technique, resources are added to the project for the least cost possible. Cost and schedule tradeoffs are analyzed to determine how to obtain the greatest amount of compression for the least incremental cost” (Monnappa 2017). Note that crashing is not typically effective with IT projects.

The important thing to remember when attempting to compress a schedule is that you need to focus on compressing the critical path. It doesn’t do any good to speed up tasks that aren’t on the critical path. According to an early, but still useful article about CPM, you can think of the critical path as the “bottleneck route”:

Only by finding ways to shorten jobs along the critical path can the overall project time be reduced; the time required to perform noncritical jobs is irrelevant from the viewpoint of total project time. The frequent (and costly) practice of “crashing” all jobs in a project in order to reduce total project time is thus unnecessary. Typically, only about 10% of the jobs in large projects are critical. (This figure will naturally vary from project to project.) Of course, if some way is found to shorten one or more of the critical jobs, then not only will the whole project time be shortened but the critical path itself may shift and some previously noncritical jobs may become critical. (Levy, Thompson and Wiest 1963)

Brooks’ Law and Agile Development

In his seminal book The Mythical Man-Month, Fred Brooks explains that crashing a schedule doesn’t work in software development because: 1) people need time (often a lot of time) to get up to speed on a project; 2) as you add more people to a project, you increase the amount of communication required, which reduces everyone’s productivity; and 3) software development tasks can’t be subdivided into smaller tasks the way physical tasks such as painting a house can be. His entire argument can be boiled down to one widely quoted line, known as Brooks’ Law: “Adding manpower to a late software project makes it later” (1975, 25).
Dave Pagenkopf, an Application Development & Integration Director at the UW-Madison, explains how Agile software development offers an alternative to the painful realities of Brooks’ Law:

Early in my career, like many software engineers, I didn’t see how Brooks’ Law could possibly be true. But as I began to lead software projects, I began to see first-hand the problems that come with crashing a software development schedule. One of the reasons that I prefer Agile so much is that the approach keeps options open when a project is behind schedule. To hit a date in an Agile project, you can reduce the scope (keeping in mind that you can always add more scope later). An Agile software project that is 80% completed is likely still useful. A waterfall software project that is 80% completed is likely useless.

Here are a few tips to keep in mind when attempting to compress a schedule:
• Engage the entire team in searching for opportunities with the largest time/cost impact.
• Look for ways to increase concurrency, and for activities in which increasing assigned resources will shorten the activity’s duration.
• Consider offering incentives for early completion. This is common, for example, in some highway projects, in which contractors are charged for every day that a lane is closed, or offered a bonus for completing the project early. This gives the contractors incentives to minimize the number of lane closures at any given time.
• Not all activities have equal value to project delivery. Some are merely “nice to have” activities. This is often true in open-ended projects, such as product development projects. Once you get to work shortening a project plan, you may be surprised by how much you can cut out without significantly affecting final deliverables.
• Make schedule compression changes carefully, always keeping in mind that schedule compression can add risk. Make sure you thoroughly understand the eliminated activities to ensure you don’t miss something crucial.
• Although CPM presumes a geometric order approach to planning and scheduling, it is not blind to the uncertainties that can arise in any project. A typical CPM schedule specifies the slack (or float) associated with each activity, thereby allowing leeway for activities that might run longer or take less time than expected.

7.5 Pull: Post-Its, Last Planner, and Agile

Now that you are familiar with CPM, the geometric order response to the demands of scheduling, let’s focus on the living order approach, pull scheduling. A pull schedule is by its very nature a work in progress. Creating it is a collaborative process, and it must be updated regularly in response to current conditions. As you saw in Lesson 6, an initial pull schedule is often created during a structured collaborative session with key project members using color-coded Post-it notes that can be removed or repositioned as necessary. The orange notes in Figure 7-1 represent deliverables; the yellow notes represent steps required to produce the deliverables. After all stakeholders agree, a schedule like this is typically translated into a digital format, such as Microsoft Project or Microsoft Excel.

Figure 7-1: Pull schedules are often created with Post-it Notes on a white board (Source: John Nelson)

In a pull schedule, it is essential to define the project’s deliverables and handoffs, which, cumulatively, add up to the project’s outcome. That’s why color-coded Post-it notes are so useful; they allow you to see all the project’s deliverables at a glance.
A pull schedule also makes it easy to see the steps required to produce a deliverable, and to identify when the handoff to the next phase of the project occurs. As in a relay race, where runners pass the baton from one to the other, the handoffs are crucial to a project’s success. If a runner drops the baton, it often doesn’t matter how quickly she ran her leg of the race, because the other runners will never be able to make up for the time lost in retrieving the dropped baton. The same is true in living order project management, in which the flow of work from one phase to the next is of paramount concern, and in which successful handoffs between phases can mean the difference between a project that fails and one that succeeds.

Creating a Pull Schedule

You can create a pull schedule electronically, using any number of scheduling programs. But to encourage the kind of collaborative conversations that help all stakeholders become pull thinkers, it’s helpful to start by gathering all stakeholders in a room with a large white board (or an entire wall) set aside to use as the schedule work area. Working backwards from a target completion date, stakeholders place color-coded Post-it notes on the schedule to indicate when they will complete various tasks. No participant is allowed to move another participant’s Post-it note. Instead, as scheduling conflicts become apparent, stakeholders need to negotiate with each other, repositioning Post-it notes only after stakeholders agree on a solution to each scheduling problem. Because the people creating the schedule are the actual people responsible for the various activities, the process inevitably focuses on activities that are dependent on other activities. For example, passage of a key internal user test for new software would need to precede release of the software to an expanded beta test group. The end result of this kind of planning is a schedule with far greater team buy-in than can be produced with CPM alone.

Post-it Note Planning

The word is out about the power of Post-it notes in the world of project management. Innovators in many fields now advocate using sticky notes as an essential tool for brainstorming and stirring up creativity, as well as for scheduling and planning.

The step-by-step process of creating a pull schedule is hard to grasp in the abstract. To really learn how it works, you have to do it. But you can get a better sense of the steps involved in pull scheduling by watching these videos:
• A quick three-minute introduction to pull planning schedules in construction: “Pull Planning: Miron Construction Co.”
• A more in-depth, 30-minute discussion: “Pull Planning: Lean Construction.”
• Although essentially an ad for a company that sells supplies related to pull planning, this one-minute video shows one way to organize a room for pull scheduling: Pull Planning Kit: Big Room Supplies.

Varieties of Pull Scheduling

Pull scheduling, in the form of the Last Planner System (LPS), is essential to Lean. The goal of the LPS is “to produce predictable work flow and rapid learning in programming, design, construction, and commissioning of projects” (Lean Construction Institute n.d.).
The five main elements of the LPS include: • Master Scheduling (setting milestones and strategy; identification of long lead items); • Phase “Pull” planning (specify handoffs; identify operational conflicts); • Make Work Ready Planning (look ahead planning to ensure that work is made ready for installation; re-planning as necessary); • Weekly Work Planning (commitments to perform work in a certain manner and a certain sequence); and • Learning (measuring percent of plan complete (PPC), deep dive into reasons for failure, developing and implementing lessons learned). (Lean Construction Institute) Note that these elements are similar to Agile scrum, which is not surprising given that the LPS and Agile both emerged from Lean. Also, these five elements of LPS tie back to the concept of rolling wave planning, described in Lesson 6. Schedules in the LPS focus on the last responsible moment, which is the “instant in which the cost of the delay of a decision surpasses the benefit of delay; or the moment when failing to make a decision eliminates an important alternative” (Lean Construction Institute). The last responsible moment is similar to choosing when to make an airline reservation. You want to wait long enough to know enough details to avoid costly changes and you want to take advantage of possible sale prices, but you also want to avoid cost increases and fully booked flights in the weeks just before travel. You choose the last responsible moment to book your travel using acquired knowledge and expectations about the future. In a construction site, there may be an LRM for finalizing excavation, another LRM for setting the forms, and yet another LRM for pouring the concrete. Project managers who are new to LPS scheduling find this focus on the last responsible moment to be counter-intuitive, because once we identify the critical path, our intuition tells us to move things along the critical path as fast as possible. However, this presumes that you know everything there is to know about a project at the very beginning, which of course is never the case. In fact, focusing on the critical path sometimes causes us to do things earlier than we need to, which can lead to mistakes and rework as the needs of the project become clearer. In living order, we see projects as knowledge collection experiences, and therefore strive to put off doing any task until it is absolutely necessary. The LPS forces you to ask the question “How long can I defer this until I absolutely have to do it, because something else depends on it?” When creating schedules in a Lean manufacturing environment, reducing batch sizes is an essential concept. Rather than scheduling a series of tasks to be completed once on a large batch, the small batch approach schedules many passes through the same series of tasks. This approach is more flexible and eliminates waste, ultimately increasing overall efficiency. It has been used successfully in paper mills, steel mills, and other industries (Preactor 2007). For more on small-batch scheduling, see this blog post: “Batch Scheduling in a Lean Manufacturing World. In all industries, a well-thought-out schedule—one that stakeholders can rely on—forms the basis for the formal commitments between team members that, in the world of Lean and the LPS, are known as reliable promises. 
As you learned in Lesson 5, a reliable promise is predicated on a team member’s honest assessment that she does indeed have the authority, competence, and capacity to make a promise, and a willingness to correct if she fails to follow through. A reliable promise identifies when a handoff will occur and the expectation that the receiver can be assured that the handoff will be complete and of the expected quality. For example, in the course of a project, stakeholders might make reliable promises regarding the completion of a required report, completion of a portion of software, or completion of a subcontractor’s work on a designated portion of a building. 7.6 The Critical Path in Living Order In any undertaking, keeping track of the sequence of activities that must be completed to ensure that a project is concluded on time is essential. Indeed, in some situations, identifying and monitoring the critical path using CPM is a contractual obligation. This is most common in governmental work, especially in construction projects for departments of transportation. But in highly fluid, living order situations, it’s important to ask exactly how much time and effort you should spend keeping track of the critical path. CPM advocates would argue your primary job as a project manager is to monitor the critical path. But some experts experienced in managing large, highly complex projects argue that focusing too much on the critical path can be counterproductive. In their insightful paper, “The Marriage of CPM and Lean Construction,” two experienced Boldt company executives—Bob Huber, scheduling manager, and Paul Reiser, vice president of production and process innovation—look at the scheduling process through the lens of Lean. True to their experience in Lean construction techniques pioneered by the Boldt company, they start by asking the essential question of any Lean enterprise: what value does it provide? Their analysis shows that a schedule provides different value to different stakeholders. For the owner, the “value received from the schedule is the ability to communicate project duration and financing needs to upstream and downstream interested parties.” The value provided to other stakeholders include “project duration, impacts to adjacent facilities, expectations for the timing of engineering deliverables, crew flow map, just-in-time delivery opportunities” (2003). For stakeholders primarily interested in project duration, the critical path is a special focus. But not everyone involved in a project requires minute details on the status of activities on the critical path. Huber and Reiser argue that, on a construction site, providing constant schedule updates using the complicated software available to manage CPM schedules wastes one of the most important resources available on any project: the attention of project stakeholders. The explosive growth in the capability and sophistication of computer-based project management software over the last few decades has not been closely matched by a parallel interest in or need for the data and analysis that they provide. This is especially true of the interests and needs of the front-line production manager on a construction site. The planning effort, as it demands time and energy from the front-line managers, has to compete with day-to-day project requirements for safety and environmental considerations, scope management, financial management, labor relations, owner relations, procurement, payroll, and documentation. 
In this competitive environment, the competition being that for the attention of the front-line production manager, the CPM schedule must necessarily deliver its value quickly and efficiently or it faces the distinct possibility of losing out to other persistent demands on the manager’s time and attention. Just because we can create an extremely detailed WBS-based resource loaded and leveled schedule and just because we can report its content in a mind-numbing array of diagrams, charts, and graphs doesn’t mean we should. In fact, practiced as an interactive discussion of crew flow and site coordination needs, with data captured and analyzed for alignment with project needs in real time, the CPM scheduling process can fulfill its assigned functions very efficiently. The test should always be whether the CPM schedule is delivering value and being readily consumed by the site production controllers. (Huber and Reiser 2003) John Nelson, adjunct professor of Engineering at the UW-Madison, and a veteran of many years in the construction industry, argues that an excessive focus on the critical path can derail a project’s chance of success: “Most critical path projects don’t meet their milestones. Most living order projects which ignore critical path do meet their milestones. An excessive focus on the critical path uses up too much energy; everyone is working on updating the critical path without actually getting anything done.” Still, he explains, all forms of scheduling have their benefits. “Instead of just focusing on one way of thinking about a schedule, you should take advantage of all the scheduling techniques available: critical path, push, pull, and so on. If you use all these techniques to stress-test your conceptual understanding of the project, then you’re going to have a higher probability of success. And always keep in mind that a schedule conceived in one situation may have to be thrown out if externalities intervene. That’s the nature of living order. This is a special concern with CPM. If you try to manage to a critical path that was conceived under different circumstances, you have a lower probability of meeting your goals.” 7.7 Focus on Milestones One way to avoid getting lost in a sea of details is to focus on your project’s milestones, which can serve as a high-level guide. You can use pull planning to identify your project’s milestones, and then use critical path to figure out how to hit those milestones. It gives a reality test to whether your milestones are in fact achievable. Then you’re off and running, in living order. In an excellent blog post on the usefulness of milestones, Elizabeth Harrin explains that milestones should be used “as a way of showing forward movement and progress and also show people what is going on, even if they don’t have a detailed knowledge of the tasks involved to get there. In that respect, they are very useful for stakeholder communication and setting expectations” (Harrin 2017). You can use milestones, she explains, to track your progress, focusing on • The start of significant phases of work • The end of significant phases of work • To mark the deadline for something • To show when an important decision is being made. (Harrin 2017) Milestones are especially useful as a form of communication on the health of a project. A version of a project schedule that consists only of milestones allows stakeholders to get a quick sense of where things stand. 
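As a rough illustration of such a milestone-only view, the following sketch filters a fuller activity list down to its zero-duration entries. The schedule entries, dates, and durations are hypothetical, and the reporting date is hard-coded purely for the example.

```python
from datetime import date

# Hypothetical schedule entries: (name, planned_date, duration_in_days).
# Per the definition earlier in this lesson, the zero-duration entries
# are milestones.
schedule = [
    ("Kickoff complete",             date(2024, 3, 1),   0),
    ("Draft requirements",           date(2024, 3, 4),  10),
    ("Requirements signed off",      date(2024, 3, 18),  0),
    ("Build prototype",              date(2024, 3, 19), 30),
    ("Prototype accepted by client", date(2024, 4, 30),  0),
]

# Milestone-only view: a quick, high-level health check for stakeholders
# who do not need task-level detail.
today = date(2024, 3, 20)
for name, planned, duration in schedule:
    if duration == 0:
        status = "planned date has passed" if planned <= today else "upcoming"
        print(f"{planned}  {name}  ({status})")
```

Most scheduling tools offer a similar milestone filter or report, so you rarely need to script this yourself; the point is simply that a handful of milestone dates can stand in for the full schedule when communicating with stakeholders.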
As you’ll learn in Lesson 11, you’ll also want to report on milestones in the project’s dashboard, which should serve as an at-a-glance update for the project. ~Practical Tips • Make sure you understand the difference between a plan and a schedule: The relationship between a plan and a schedule is similar to the relationship between a plan for a trip, which spells out general goals, and the trip itinerary, which defines how and when you will get from one stop to the next and complete the trip within available time. A project plan has to include some consideration of time, but it doesn’t need to go into details. • Creating a schedule can help you organize your thoughts: Creating a schedule is typically a practical endeavor, focused on planning actual work. However, you can also create a schedule as a way of organizing your thoughts and sharing what you have learned about the project. • Develop a schedule at the detail necessary to plan and coordinate: Planning beyond the necessary detail adds no value. A schedule pitched at too high a level runs the risk of missing key activities or identifying critical risks. A schedule that’s too comprehensive becomes a burden to update and can make it hard for team members to track activities, thus making it of little practical value. • Think of a schedule as a tool for communicating with stakeholders: Above all else, a schedule is a communication tool, devised to keep stakeholders up to date about all current knowledge about the project. That means it is a living document that can’t be considered final until the project is finished. A schedule should be updated regularly and revised to incorporate the latest knowledge and information as the project advances. Strive to develop and communicate the project schedule in a manner that is most helpful to project participants. • Planning for perfect execution inevitably leads to delays: Always plan for the imperfections of reality. Draw on your own past experience when you review a schedule to help you decide if it is realistic. If you don’t have any relevant past experience, then consult with someone who does. You might find it helpful to talk to a more experienced colleague. You can also draw on the many resources available within your industry. • From time to time ask yourself this important question: What is a reasonable number of activities for a single project? There’s no hard and fast answer to this question, as all projects are different and require differing degrees of activity definition. But as a rule of thumb, most people can successfully keep track of 30-50 activities. More than that and they start getting lost in the detail. Other team members might have sub-tasks of 30-50 activities, meaning an overall plan may have hundreds of rolled-up activities. • Understand the relationship between resource allocation and the critical path: In many cases, the critical path is only valid once resources have been allocated. If resources are over-allocated, the critical path might give you a false sense of security. • Do not schedule a task too early in the project, just because it’s on the critical path: Focusing on the critical path sometimes causes us to do things earlier than we need to, which can lead to mistakes and rework as the project constraints become clearer. In living order, we see projects as knowledge collection experiences and therefore avoid starting activities prematurely. 
• A schedule does not guarantee project success: Creating and updating a schedule is an ongoing process that must be adapted to externalities and needs of the customer and used to align stakeholders. ~Summary • A schedule is a specific, time-based map designed to help the project team get from the current state to successful project completion. Whereas a plan is like a football coach’s strategy for winning, a schedule is all about tactics. Above all else, it is a form of communication with everyone involved in the project. It should contain just the right amount of detail. • Making sure all stakeholders use the same terminology is crucial in all phases of project management, but it’s especially important when you are trying to get a group of diverse people to agree to a schedule. Important terms related to scheduling include milestone, activity, duration, resource, cost, and slack. • Scheduling is a phase of project management that necessarily blends geometric and living order. The planning and scheduling technique most closely associated with push planning and the geometric order is the critical path method (CPM). The scheduling technique that best exemplifies living order principles is a pull schedule created collaboratively by stakeholders, typically by using multi-colored Post-it notes. Details of this type of scheduling have been codified in the Last Planner System and in Agile. • The Critical Path Method (CPM) focuses on identifying the critical path of activities required to ensure project success, and then closely monitoring the activities on the critical path through the entire project. CPM is especially useful for large, complex projects where schedule interrelationships may not be readily apparent. It is the ultimate geometric order tool for project management and can lure you into a false sense of security regarding the predictableness of a project. However, it can be very helpful when you need to compress a schedule by fast tracking or crashing. • A pull schedule is by its very nature a work in progress. Creating it is a collaborative process, and it must be updated regularly in response to current conditions. An initial pull schedule is often created using color-coded Post-it notes that can be removed or repositioned as necessary. Pull scheduling, in the form of the Last Planner System (LPS) is essential to Lean. Schedules in the LPS focus on the last responsible moment and rely on the use of reliable promises. • In some situations, especially in governmental work, monitoring the critical path is a contractual obligation. But it is possible to overemphasize the critical path, thereby wasting the energy and attention of project stakeholders. • Focusing on project milestones is a good way to provide a high-level schedule that is useful for most stakeholders. ~Glossary • activity—“An element of work performed during the course of a project. An activity normally has an expected duration, an expected cost, and expected resource requirements” (Project-Management.com 2016). Beware that some organizations subdivide activities into tasks, while others use task and activity synonymously. • compress a schedule—The process of taking a schedule you have already developed and reducing it without adjusting the project’s scope. • cost—“An expenditure, usually of money, for the purchase of goods or services” (Law 2016). • crashing—A schedule compression technique that involves adding resources such as overtime or more equipment to speed up the schedule. 
Because of the costs involved in adding resources, crashing is “the technique to use when fast tracking has not saved enough time on the project schedule. With this technique, resources are added to the project for the least cost possible” (Monnappa 2017).
• critical path—The “series of activities which determines the earliest completion of the project” (Project-Management.com 2016).
• duration—“The time needed to complete an activity, path, or project” (Larson and Gray 2011, 659).
• fast tracking—A schedule compression technique in which “activities that would have been performed sequentially using the original schedule are performed in parallel. In other words, fast tracking a project means the activities are worked on simultaneously instead of waiting for each piece to be completed separately. But fast tracking can only be applied if the activities in question can actually be overlapped” (Monnappa 2017).
• float—See slack.
• Last Planner System (LPS)—A proprietary production planning system that exemplifies living order concepts and pull thinking; developed by Glenn Ballard and Greg Howell as a practical implementation of Lean principles.
• last responsible moment—“The instant in which the cost of the delay of a decision surpasses the benefit of delay; or the moment when failing to make a decision eliminates an important alternative” (Lean Construction Institute).
• milestone—“A significant event in the project; usually completion of a major deliverable” (State of Michigan: Department of Technology, Management & Budget).
• path—“A sequence of connected activities” (Larson and Gray 2011, 662).
• reliable promise—In Lean and the Last Planner System, a formal commitment between team members. As defined by the Lean Construction Institute, “A promise made by a performer only after self-assuring that the promisor (1) is competent or has access to the competence (both skill and wherewithal), (2) has estimated the amount of time the task will take, (3) has blocked all time needed to perform, (4) is freely committing and is not privately doubting ability to achieve the outcome, and (5) is prepared to accept any upset that may result from failure to deliver as promised” (Lean Construction Institute n.d.).
• resource—“Any personnel, material, or equipment required for the performance of an activity” (Project-Management.com 2016).
• schedule—A specific, time-based map designed to help the project team get from the current state to successful project completion. A schedule should build value, have an efficient flow, and be driven by pull forces.
• slack—“Calculated time span during which an event has to occur within the logical and imposed constraints of the network, without affecting the total project duration” (Project-Management.com 2016). Or put more simply, slack, which is also called float, is the “amount of time that a task can be delayed without causing a delay” to subsequent tasks or the project’s ultimate completion date (Santiago and Magallon 2009).
• sprint—In Agile project management, a brief (typically two-week) iterative cycle focused on producing an identified working deliverable (e.g., a segment of working code).
• task—See activity.

Brooks, Jr., Frederick P. 1975. The Mythical Man-Month. Boston: Addison-Wesley.
Harrin, Elizabeth. 2017. “Learn What a Project Milestone Is.” The Balance Careers. October 13. https://www.thebalance.com/what-is-a...estone-3990338.
Huber, Bob, and Paul Reiser. 2003. The Marriage of CPM and Lean Construction. The Boldt Company.
Larson, Erik W., and Clifford F. Gray. 2011. Project Management: The Managerial Process, Sixth Edition. New York: McGraw-Hill Education. Law, Jonathan, ed. 2016. A Dictionary of Business and Management. Oxford: Oxford University Press. https://books.google.com/books?id=w3...page&q&f=false. Lean Construction Institute. n.d. “Glossary.” Lean Construction Institute. Accessed July 1, 2018. www.leanconstruction.org/training/glossary/#l. —. n.d. “The Last Planner.” Lean Construction Institute. Accessed July 1, 2018. www.leanconstruction.org/trai...-last-planner/. Levy, F. K., G. L. Thompson, and J. D. Wiest. 1963. “The ABCs of the Critical Path.” Harvard Business Review. https://hbr.org/1963/09/the-abcs-of-...al-path-method. Monnappa, Avantika. 2017. “Project Management Learning Series: Fast Tracking versus Crashing.” Simplilearn. December 19. https://www.simplilearn.com/fast-tra...ashing-article. Preactor. 2007. “Batch Scheduling in a Lean Manufacturing World.” Preactor. February. http://www.preactor.com/Batch-Schedu...acturing-World. Project-Management.com. 2016. “PMO and Project Management Dictionary.” PM Project Management.com. December 16. https://project-management.com/pmo-a...nt-dictionary/. Rouse, Margaret. 2015. “Critical Path Method (CPM).” WhatIs.TechTarget.com. January. http://whatis.techtarget.com/definit...ath-method-CPM. State of Michigan: Department of Technology, Management & Budget. 2013. “Project Management Key Terms, Definitions and Acronyms.” August. https://www.michigan.gov/documents/s...3_431285_7.pdf.
I may say that this is the greatest factor—the way in which the expedition is equipped—the way in which every difficulty is foreseen, and precautions taken for meeting or avoiding it. Victory awaits him who has everything in order—luck, people call it. Defeat is certain for him who has neglected to take the necessary precautions in time; this is called bad luck. —Roald Amundsen, Norwegian polar explorer (The South Pole: An Account of the Norwegian Antarctic Expedition in the “Fram,” 1910-1912, 370) Objectives After reading this chapter, you will be able to • Distinguish between risks and issues • Identify types of risks, and explain how team members’ roles can affect their perception of risk • Compare traditional, geometric-order risk management with a living-order approach • Discuss risks associated with product development and IT projects • Explain the concept of a black swan event and its relevance to risk management • Discuss the connections between ethics and risk management The Big Ideas in this Lesson • A risk is a specific threat to a project multiplied by the consequences if the threat materializes. With risk comes opportunity and new possibility, as long as you can clearly identify the risks you face and employ reliable techniques for managing them. • The living-order approach to risk management emphasizes sharing risk, with the most risk assigned to the party best able to manage it and most likely to benefit from it. By contrast, the more traditional, geometric order approach to risk focuses on avoiding it, shifting it to other parties whenever possible. • A risk is caused by external factors that the project team cannot control, whereas an issue is a known concern—something a team will definitely have to address. 8.1 Identifying Risks Most people use the term risk to refer to something bad that might happen, or that is unavoidable. The connotations are nearly always negative. But in fact, with risk comes opportunity and new possibility, as long as you can clearly identify the risks you face and employ reliable techniques for managing them. Risks can impact a project’s cost and schedule. They can affect the health and safety of the project team or the general public, as well as the local or global environment. They can also affect an organization’s reputation, and its larger operational objectives. More Risk-Related Terms Risk Capacity: The maximum amount of risk an organization can bear. Risk Appetite: The maximum amount of risk an organization is willing to assume. According to Larry Roth, Vice President at Arcadis and former Assistant Executive Director of the American Society of Civil Engineers, the first step toward identifying and managing risks is a precise definition of the term. He defines risk as “the probability that something bad will happen times the consequences if it does.” The likelihood of a risk being realized is typically represented as a probability value from 0 to 1, with 0 indicating that the risk does not exist, and 1 indicating that the risk is absolutely certain to occur. According to Roth, the term tolerable risk refers to the risk you are willing to live with in order to enjoy certain benefits (pers. comm., April 23, 2018). In daily life, we make risk calculations all the time. For example, when buying a new smartphone, you are typically faced with the question of whether to insure your new device. If you irreparably damage your phone, the potential consequence is the cost of a new phone—let’s say \$500. 
But what is the actual probability of ruining your phone? If you are going to be using your phone in your office or at home, you might think the probability is low, and so choose to forgo the insurance. In this case, a risk analysis calculation might look like this:

0.2 × $500 = $100

In other words, you might decide that the risk of damaging your phone is a tolerable risk, one you are willing to live with. The benefit you gain from tolerating that risk is holding onto the money you would otherwise have to pay for insurance. But what might make you decide that the risk of damaging the phone was not a tolerable risk? Suppose you plan to use your phone regularly on a boat. In that case, the chance of damaging it by dropping it in the water is high, and so your calculation might look like this:

0.99 × $500 = $495

For many people, this might make insurance seem like a good idea. Your tolerance of risk is partly a matter of personality and attitude. This article describes a range of attitudes toward risk, ranging from “risk paranoid” to “risk addicted”: https://www.pmi.org/learning/library/risk-management-expected-value-analysis-6134.

8.2 Threats, Issues, and Risks

Inexperienced project managers often make the mistake of confusing threats, issues, and risks. A threat is a potential hazard, such as dropping your phone in the water. A threat is not in itself a risk. A risk is the probability that the threat will be realized times the consequences. On the other end of the uncertainty spectrum are issues, which are known potential problems that the project team will definitely have to keep an eye on. For example, the mere possibility of exceeding a project’s budget is not a risk. It’s a well-known issue associated with any project; part of managing the project is managing the budget. But if your particular project involves extensive use of copper wiring, then an increase in the price of copper is a direct threat to your project’s success, and the associated risk is the probability of higher copper prices times the consequences of such an increase. Team members cannot control the price of copper; it is a risk that you’ll have to respond to, making decisions in response to the changing situation. Risk expert Carl Pritchard distinguishes between risks and issues as follows: “A risk is out there in the future, and we don’t know if it is going to happen; but if it does happen it will have an impact. Issues are risks realized. They are the risks whose time has come, so to speak” (Pritchard and Abeid 2013). That’s not to say that all issues used to be risks. And some things can be issues at an organizational level, but a risk when it comes to your particular project. Pritchard explains:

An issue in your organization may be that management changes its mind….If your management is constantly changing their minds, time and time and time again—that’s an issue. But for your particular project, they haven’t changed their mind yet. So for your project it’s still a risk. It’s a future phenomenon, because it hasn’t happened to you yet. You’re anticipating that eventually it will become an issue. But for now, at least, it’s still out there in the future. (Pritchard and Abeid 2013)

Table 8-1 compares issues, threats, and risks on different projects.

Table 8-1: Distinguishing between issues, threats, and risks

Project: Developing a new cell phone
• Issue: The phone must be released on schedule or consumers will consider it obsolete.
• Threat: Introduction of new features in a competing product, which would necessitate adding the same feature to your product.
• Risk: The probability that a competitor will introduce a new feature times the consequences in time and money required to remain competitive.

Project: Constructing a sea wall
• Issue: The sea wall must be resilient even if exposed to the most severe storm surge that can be anticipated given our current knowledge.
• Threat: Rising sea levels caused by climate change make it hard to predict the future meaning of the words “the most severe storm surge.”
• Risk: The probability of sea levels rising higher than the sea wall times the monetary and safety consequences of flooding.

Project: Constructing an addition to a clinic
• Issue: Cost of capital has significant impact on capital project decision-making.
• Threat: The Federal Reserve raises interest rates, increasing the cost of borrowing money for the project.
• Risk: The probability of rising interest rates times the increase to overall project cost if interest rates do go up.
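As a rough sketch of how Roth’s definition can be applied to risks like those in Table 8-1, the following assigns each one an illustrative probability and dollar consequence and computes the resulting exposure. All of the numbers are invented for the example; real values would come from the team’s own risk analysis.

```python
# Roth's definition: risk = probability that something bad happens
# times the consequences if it does. The probabilities and dollar
# consequences below are invented purely for illustration.
risks = [
    ("Competitor introduces a new feature",         0.50,   200_000),
    ("Sea level rises higher than the sea wall",    0.05, 5_000_000),
    ("Interest rates rise before financing closes", 0.30,   150_000),
]

for description, probability, consequence in risks:
    exposure = probability * consequence  # expected cost of the risk
    print(f"{description}: {probability:.0%} x ${consequence:,} = ${exposure:,.0f}")
```

Ranking risks by this exposure figure is one simple way to decide which threats deserve the most attention, though a low-probability, high-consequence risk (like the sea wall example) may still warrant special treatment.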
The Fine Art of Perceiving Risk

A quick perusal of recent articles published in Risk Management magazine hints at the vast array of risks facing modern organizations. If you were asked to generate your own list, you might include environmental disasters, financial setbacks, and data theft as obvious risks. But what about the more obscure dangers associated with patent translations or cyber extortion? The following examines a few varieties of issues and related risks you might not have considered. Can you think of any issues and risks specific to your industry that you would add to this list?
• Human capital: Turnover among team members is an inevitable issue on long-running projects. People will come and go, and you have to be prepared to deal with that. But some forms of turnover go beyond issues and are in fact real risks. For example, one human capital risk might be loss of a key manager or technical expert whose relationship with the client is critical to keeping the contract. Team members behaving unethically is another human capital risk. Suppose a team member on a highway construction project is fired for taking a bribe. This could have effects that ripple through the entire team for a long time to come. Team members might feel that their professional reputations are at risk, or they might decide that the team’s manager is not to be trusted. Once team cohesion begins to crumble in this way, it can be hard to put things back together. Other human capital issues include catastrophic work events and negligent hiring practices (Lowers & Associates 2013). For example, the 2013 launch of HealthCare.gov failed, in part, because the project team lacked software developers with experience launching a vast, nationwide website. Meanwhile, departures of vital staff members at the agency responsible for overseeing the insurance marketplace also hampered progress (Goldstein 2016). These unidentified human capital risks brought the project to a standstill. It was ultimately saved by a “hastily assembled group of tech wizards” with the know-how required to get the website up and running (Brill 2014).
• Marketing: Project management teams often struggle to communicate with an organization’s marketing department. Rather than drawing on the marketing department’s understanding of customer needs, project teams often prefer to draw on their own technological know-how to create something cool, and then attempt to push the new product onto the market.
But this can be a disaster if the new product reaches the market without the support of a fine-tuned marketing campaign. This is especially true for innovative products. For example, product developers might focus on creating the most advanced hardware for a smart thermostat, when in fact customers primarily care about having a software interface that’s easy to use. As in many situations, a pull approach—asking the marketing department to tell your team what the market wants—is often a better option. Of course, this necessitates a good working relationship with the marketing department, which is not something you can establish overnight. Sometimes a marketing risk takes the form of a product or service that only partly serves the customer’s needs. For example, one of the many problems with the rollout of the HealthCare.gov website, in 2013, was a design that “had capacity for just a fraction of the planned number of consumers who could shop for health plans and fill out applications”(Goldstein 2016). • Compliance: In many cases, you’ll need to make sure your project complies with “rules, laws, policies, and standards of governance that may be imposed by regulatory bodies and government agencies.” Indeed, some projects are exclusively devoted to compliance tasks and can “range from implementation of employment laws to setting up processes and structures for meeting and reporting statutory tax and audit requirements to ensuring compliance with industry standards such as health and safety regulations” (Ram 2016). In any arena, the repercussions of failing to follow government regulations can be extreme. Ensuring compliance starts with learning what regulations apply to your particular project and staying up-to-date on changes to applicable laws. (For more on compliance projects, see this blog post by Jiwat Ram: https://www.projectmanagement.com/articles/351236/Compliance-Projects–Fragile–Please-Handle-with-Care-.) Keep in mind that safety concerns can evolve quickly, as was the case with Samsung’s Galaxy Note 7 phone; millions of phones had to be recalled and the company’s new flagship smartphone scrapped after lithium-ion batteries caused devices to catch fire (Lee 2016). • Sustainability: Although businesses have always had to deal with issues associated with the availability of natural resources, in the past they rarely questioned the validity of a business model that presumed the consumption of vast amounts of natural resources. But as scientists provide ever more startling evidence that endless economic growth is not a realistic strategy for the human race, businesses have had to focus on issues related to sustainability if they want to survive. For one thing, people increasingly want to work for organizations they perceive as having a serious commitment to sustainability. Indeed, the need to recruit top talent in the automotive world is one motivation behind the on-going transformation of Ford’s Dearborn, Michigan campus into a sustainability showcase (Martinez 2016). Meanwhile, Ford’s \$11 billion investment in electric vehicles is a bid to remain viable in foreign markets that have more stringent sustainability requirements than the United States (Marshall 2018). 
A report on sustainability risks by Wilbury Stratton, an executive search firm, lists some specific sustainability risks: Social responsibility risks that threaten the license to operate a mining operation, risks tied to perceptions of over-consumption of water, and reputational risks linked to investments in projects with potentially damaging environmental consequences…. Additional trends in sustainability risk include risks to financial performance from volatile energy prices, compliance risks triggered by new carbon regulations, and risks from product substitution as customers switch to more sustainable alternatives. (2012) • Complexity: Complex projects often involve risks that are hard to identify at the outset. Thus, complex projects often require a flexible, adaptable approach to risk management, with the project team prepared to respond to new risks as they emerge. Complex projects can be derailed by highly detailed plans and rigid controls which can “lock the project management team into an inflexible mindset and daily pattern of work that cannot keep up with unpredictable changes in the project. Rather than reduce risk, this will amplify it and reduce [the team’s] capacity to achieve [its] goals. The effort to control risk might leave the team trying to tame a tiger while stuck in a straitjacket” (Broadleaf 2016). Agile was specifically developed to deal with the challenges associated with the kinds of complexity found in IT projects. Pull planning also offers advantages in complex environments, in part because it forces team members to communicate and stay flexible. Perhaps the hardest risks of all to prepare for are the risks that your training and professional biases prevent you from perceiving in the first place. As an engineer, you are predisposed to identify technical risks. You might not be quite as good at recognizing other types of risks. In Chapter 1 of Proactive Risk Management, Preston G. Smith and Guy M. Merritt list some risks associated with a fictitious product. The list includes marketing, sourcing, regulatory, and technical risks. In summing up, the authors point out two essential facts about the list of risks: “First, it is specific to this project and market at this point in time. Second, it goes far beyond engineering items” (2002, 2-3). Later in the book, in a chapter on implementing a risk management program, they have this to say about an engineer’s tendency to perceive only technical risks: Good risk management is cross-functional. If engineers dominate product development, you might consider letting engineering run project risk management. This is a mistake. If you assign risk management to the engineering department and engage only engineers to identify, analyze, and plan for risks, they will place only engineering risks on their lists. (186) How Team Members Perceive Risk The role team members play in a project can hugely affect their perception of risk. According to David Hillson, a consultant and author of many books on risk, a project sponsor (upper management or the customer) and the project manager perceive things very differently: • The project manager is accountable for delivery of the project objectives, and therefore needs to be aware of any risks that could affect that delivery, either positively or negatively. Her scope of interest is focused on specific sources of uncertainty within the project. 
These sources are likely to be particular future events or sets of circumstances or conditions which are uncertain to a greater or lesser extent, and which would have some degree of impact on the project if they occurred. The project manager asks, "What are the risks in my project?"….
• The project sponsor, on the other hand, is interested in risk at a different level. He is less interested in specific risks within the project, and more in the overall picture. Their question is "How risky is my project?"…. Instead of wanting to know about specific risks, the project sponsor is concerned about the overall risk of the project. This represents her exposure to the effects of uncertainty across the project as a whole. These two different perspectives reveal an important dichotomy in the nature of risk in the context of projects. A project manager is interested in "risks" while the sponsor wants to know about "risk." While the project manager looks at the risks in the project, the project sponsor looks at the risk of the project. (Hillson 2009, 17-18)

Even when you think you understand a particular stakeholder's attitude toward risk, that person's risk tolerance can change. For example, a high-level manager's tolerance for risk when your organization is doing well financially might be profoundly different from the same manager's tolerance for risk in an economic downturn. Take care to monitor the risk tolerance of all project stakeholders—including yourself. Recognize that everyone's risk tolerances can change throughout the life of the project based on a wide range of factors.

8.3 Risk Management and Project Success

Successful project managers manage the differing perceptions of risk, and the widespread confusion about its very nature, by engaging in systematic risk management. According to the Financial Times, risk management is "the process of identifying, quantifying, and managing the risks that an organization faces" (n.d.). In reality, the whole of project management can be thought of as an exercise in risk management because all aspects of project management involve anticipating change and the risks associated with it. The tasks specifically associated with risk management include "identifying the types of risk exposure within the company; measuring those potential risks; proposing means to hedge, insure, or mitigate some of the risks; and estimating the impact of various risks on the future earnings of the company" (Financial Times). Engineers are trained to use risk management tools like the risk matrix shown in Figure 8-1, in which the probability of the risk is multiplied by the severity of consequences if the risk does indeed materialize.

Figure 8-1: A risk matrix is a tool engineers often use to manage risk

This and other risk management tools can be useful because they provide an objective framework for evaluating the seriousness of risks to your project. But any risk assessment tool can do more harm than good if it lulls you into a false sense of security, so that you make the mistake of believing you really have foreseen every possible risk that might befall your project. You don't want to make the mistake of believing that the tools available for managing risk can ever be as precise as the tools we use for managing budgets and schedules, even as limited as those tools are. Perhaps the most important risk management tool is your own ability to learn about the project.
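Even so, it can help to see the matrix logic spelled out. The sketch below is a minimal Python illustration of the probability-times-severity idea behind Figure 8-1; the category boundaries, the rating table, and the example threats are all assumptions made for the sake of the example, not values from this chapter.

```python
# A minimal sketch of the probability-times-severity logic behind a risk matrix.
# The category boundaries, rating table, and example threats are illustrative
# assumptions, not values taken from this chapter.

PROBABILITY_LEVELS = [(0.3, "low"), (0.7, "medium"), (1.0, "high")]
IMPACT_LEVELS = [(50_000, "low"), (250_000, "medium"), (float("inf"), "high")]

# One possible 3x3 matrix: probability category paired with impact category.
RATING = {
    ("low", "low"): "low",        ("low", "medium"): "low",        ("low", "high"): "medium",
    ("medium", "low"): "low",     ("medium", "medium"): "medium",  ("medium", "high"): "high",
    ("high", "low"): "medium",    ("high", "medium"): "high",      ("high", "high"): "high",
}

def categorize(value, levels):
    """Return the first category whose upper bound the value does not exceed."""
    for upper_bound, label in levels:
        if value <= upper_bound:
            return label
    return levels[-1][1]

def rate_risk(probability, impact_dollars):
    """Combine probability and impact categories into a matrix rating."""
    p_cat = categorize(probability, PROBABILITY_LEVELS)
    i_cat = categorize(impact_dollars, IMPACT_LEVELS)
    return RATING[(p_cat, i_cat)]

# Hypothetical threats, for illustration only.
threats = [
    ("Competitor releases a similar feature first", 0.5, 300_000),
    ("Key supplier misses a delivery date",         0.2,  80_000),
    ("Prototype fails certification testing",       0.1, 400_000),
]

for name, probability, impact in threats:
    print(f"{name}: {rate_risk(probability, impact)} "
          f"(expected cost ${probability * impact:,.0f})")
```

A script like this is only a prioritization aid; it can say nothing about threats that never made it onto the list in the first place.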
The more you know about a project, the better you will be at foreseeing the many ways the project could go awry and what the consequences will be if they do, and the better you will be at responding to unexpected challenges. The Risk of Failing to Define “Success” Accurately Different people will have different interpretations of the nature of risks associated with a company’s future earnings, depending on how broadly one defines “success” and consequently the risks that affect the likelihood of achieving success. An example of a company failing to define “success” over the long term, with tragic consequences, is the 2010 BP spill, which poured oil into the Gulf of Mexico for 87 days. This event, now considered one of the worst human-caused disasters in history, started with a methane explosion that killed 11 workers and ultimately sank the Deepwater Horizon oil rig. After the explosion, engineers counted on a specialized valve, called a blind shear ram, to stop the flow of oil. For reasons that are still not entirely understood, the valve failed, allowing the oil to pour unchecked into the Gulf. Despite the well-known vulnerability of the blind shear ram, and the extreme difficulties of drilling at a site that tended to release “powerful ‘kicks’ of surging gas,” BP chose not to install a backup on the Deepwater Horizon. In hindsight, that now looks like an incredibly short-sighted design and construction decision, especially considering the fact that nearly all oil rigs in the gulf are equipped with a backup blind shear ram to prevent exactly the type of disaster that occurred on the Deepwater Horizon (Barstow, et al. 2010). The well’s installation might have been considered “successful” at the time of completion if it met schedule and budget targets. However, if BP had focused more on long-term protection of the company’s reputation and success, and less on the short-term economics of a single oil rig, the world might have been spared the Deepwater Horizon catastrophe. As companies venture into ever deeper waters to drill for oil, this type of risk management calculation will become even more critical. A New Approach to Risk Management In traditional, geometric order thinking, risk is a hot potato contractually tossed from one party to another. In capital projects in particular, risk has traditionally been managed through aversion rather than fair allocation. Companies do everything they can to avoid suffering the consequences of uncertainty. Unfortunately, this often results in parties being saddled with risk they can’t manage or survive. As explained in an interview with John Nelson, this often means that, in capital projects, customers are unsatisfied because conflict inherent in improper risk allocation often results in expensive and unwanted outcomes, such as numerous RFIs [Requests for Information] and change orders, redesign, delays, spiraling project costs, loss of scope to “stay in budget,” claims and disputes, a changing cast of players, poorly functioning or unmaintainable designs, unmet expectations, productivity losses, and in a worst case: lawsuits. (Allen 2007) The essential problem with traditional risk management in capital projects is that it forces each party to act in its own interests, not in the common interest because there is no risk-sharing that binds together the owner, architect, and constructor. 
In the traditional model, the owner often unintentionally presumes he will get the minimum—the lowest quality of a component, system, etc.—that all the other project parties can get away with. So, when a problem occurs, such as a delay, each party naturally acts to protect their own interests rather than look for a common answer. (Allen 2007) The traditional approach sees risk as something bad that must be avoided as much as possible. By contrast, a living order approach sees risk as a sign of opportunity, something essential to capitalism and innovation. Rather than tossing a hot potato of risk from stakeholder to stakeholder, a living order approach seeks a more equitable, holistic form of risk-sharing, with the most risk assigned to the party best able to manage it and most likely to benefit from it. In a capital project, for instance, the owner must assume “some of the risk because at the end of the day the owner has the long-term benefit of a completed facility” (Allen 2007). Risk Management Calculations Can be Risky In their book Becoming a Project Leader, Laufer et al. question the usefulness of risk management calculations, positing redundancies as a better method for handling unpredictable events in projects. They point to “Zur Shapira, who asked several hundred top executives (of permanent organizations, not of projects) what they thought about risk management, [and] found they had little use for probabilities of different outcomes. They also did not find much relevance in the calculate-and-decide paradigm. Probability estimates were just too abstract for them. As for projects, which are temporary and unique endeavors, it is usually not possible to accumulate sufficient historical data to develop reliable probabilities, even when the risky situation can be clearly defined” (Shapira 1995). Laufer et al. also point to the expertise of Brian Muirhead from NASA, who “disclosed that when his team members were asked to estimate the probability of failures, ‘many people simplistically assigned numbers to this analysis—implying a degree of accuracy that has no connection with reality’” (Muirhead 1999). Furthermore, Gary Klein, in his analysis “The Risks of Risk Management,” concluded unequivocally, “In complex situations, we should give up the delusion of managing risk. We cannot foresee or identify risks, and we cannot manage what we can’t see or understand” (Klein 2009). It therefore behooves us to build in some redundancies so that we’re able to cope with problems that may arise (Laufer, et al. 2018). In a report on managing the extensive risks involved in highly complex infrastructure projects, Frank Beckers and Uwe Stegemann advocate a “forward-looking risk assessment” that evaluates risk throughout the project’s entire lifecycle. The questions they raise are helpful to ask about any type of project unfolding in living order: • Forward-looking risk assessment: Which risks is the project facing? What is the potential cost of each of these risks? What are the potential consequences for the project’s later stages as a result of design choices made now? • Risk ownership: Which stakeholders are involved, and which risks should the different stakeholders own? What risk-management issues do each of the stakeholders face, and what contribution to risk mitigation can each of them make? 
• Risk-adjusted processes: What are the root causes of potential consequences, and through which risk adjustments or new risk processes might they be mitigated by applying life-cycle risk management principles?
• Risk governance: How can individual accountability and responsibility for risk assessment and management be established and strengthened across all lines of defense?
• Risk culture: What are the specific desired mind-sets and behaviors of all stakeholders across the life cycle and how can these be ensured? (Beckers and Stegemann 2013)

When thinking of risk and a project's life cycle, it's important to remember that many manufacturing companies, such as Boeing and John Deere, have begun focusing on making money from servicing the products they produce—putting them in the same long-term service arena as software developers. At these companies, project managers now have to adopt a greatly expanded view of a product's life cycle to encompass, for example, the decades-long life span of a tractor. Living as we do in an era when time is constantly compressed, and projects need to be completed faster and faster, it can be hard to focus on the long-term risks associated with a product your company will have to service for many years. All forms of business are in need of a radical rethinking of risk management. For starters, in any industry, it's essential to collaborate on risk management early on, during project planning, design, and procurement. The more you engage all key stakeholders (e.g., partners, contractors, and customers) in the early identification of risks and appropriate mitigation strategies, the less likely you are to be blindsided later by unexpected threats. In addition to paying attention to risk early, a good risk manager also practices proactive concurrency, which means intentionally developing an awareness of options that can be employed if things don't work out. This doesn't necessarily mean you need to have a distinct plan for every possible risk. But you should strive to remain aware of the potential for risk in every situation and challenge yourself to consider how you might respond. At all times, be alert to consequences that are beyond your team's control. Sometimes management's definition of project success is tied to longer-term or broader outcomes, often involving things well outside the control of the project's stakeholders. If you find yourself in that situation, do all you can to raise awareness of the consequences of a threat to the project being realized, emphasizing how they might affect the broader organization. It is then up to senior management, who presumably have the authority and ability to influence those wider factors, to take action or make adjustments.

Product Development Risks

In product development, the most pressing risks are often schedule-related, because it is essential to get the product out quickly to recover the initial investment and minimize the risk of obsolescence. In this environment, anything that can adversely affect the schedule is a serious risk. Conversely, an opportunity is the opposite of a threat: anything that can positively affect the schedule is an opportunity. A less recognized product development risk is complacency: product designers become so convinced they have created the best possible product that they fail to see drawbacks that customers identify the second the product reaches the market.
This famously happened in 2010 with Apple's iPhone 4, which tended to drop calls if the user interfered with the phone's antenna by touching the device's lower-left corner. At first, Apple chose to blame users, with Steve Jobs infamously advising annoyed customers, "Just avoid holding it that way" (Chamary 2016). Eventually, the company was forced to offer free plastic cases to protect the phone's antenna, but by then the damage was done. Even die-hard Apple customers grew wary of the brand, the company's stock price fell, and Consumer Reports announced that it could not recommend the iPhone 4 (Fowler, Sherr and Sheth 2010). Product development firms are especially susceptible to market risks. Maintaining the power of a company's brand is a major issue that can lead to numerous risks. One such risk is the erosion of a long-established legacy of consumer trust in a particular brand. A new, negative association (think of the Volkswagen emissions-control software scandal) can drive customers away, sabotaging the prospects for new products for years to come. Even changing a company logo presents great risks. This blog post describes redesign failures, some of which cost hundreds of millions of dollars: https://www.canny-creative.com/10-rebranding-failures-how-much-they-cost/. Another market risk is a sudden, unexpected shift in consumer preferences, as occurred in the 1990s, when, in response to an economic downturn, consumers switched from national brand groceries to less expensive generic brands, and never switched back, even after the economy improved. These days, higher quality generic brands, such as Costco's Kirkland brand, are big business—a development few analysts saw coming (Danziger 2017). Market risks can undermine all the good work engineers do in developing a product or service. For example, Uber engineers excelled in developing a system that employs geo-position analytics to enable vehicles, drivers, and riders to efficiently connect. However, the company failed to assess and address key market issues like rider safety, governmental approvals, and data security (Chen 2017). In traditional product development, risk management is relegated to research and development, with marketing and other teams maintaining a hands-off approach. But as products grow more complex, this tendency to focus only on technical risks actually increases the overall risk of project failure. In the product development world, the risk of failure is increasingly dictated by when the product arrives in the marketplace, and what other products are available at that time. In other words, in product development, schedule risks can be as crucial as, if not more crucial than, technical risks. In an article adapted from his book Developing Products in Half the Time, Preston G. Smith argues, "When we view risk as an R&D issue, we naturally concentrate on technical risk. Unfortunately, this guides us into the most unfruitful areas of risk management, because most products fail to be commercially successful due to market risks, not technical risks." Even at companies that tend to look at the big picture, "engineers will tend to revert to thinking more narrowly of just technical risk. It is management's job to keep the perspective broad" (1999). Companies that lack the broad perspective end up piling risk on risk. Although others may think of risk as solely a technical issue and attribute it to the R&D department, most risk issues have much broader roots than this.
If you treat development risk as R&D-centric, you simply miss many risks that are likely to materialize later. If others try to place responsibility for product-development risk on R&D, they unwittingly mismanage the problem. (Smith 1999)

Smith advocates for a proactive approach to risk management in which companies identify threats early and work constantly to "drive down" the possibility of a threat actually materializing, while staying flexible on unresolved issues as long as possible. He provides some helpful risk management techniques that you can read about on pages 30-32 of this article: http://www.strategy2market.com/wp-content/uploads/2014/05/Managing-Risk-As-Product-Development-Schedules-Shrink.pdf.

IT Risks

The IT world faces a slew of risks related to the complexity of the products and services it provides. As a result, IT projects are notoriously susceptible to failure. In fact, a recent survey reported a failure rate of over 50% (Florentine 2016). This figure probably underreports the issue because it focuses on the success rate for IT projects in the short run—for example, whether or not developers can get their software up and running. But as software companies rely more and more on a subscription-based business model, the long-term life cycle of IT products becomes more important. Indeed, in a world where software applications require constant updates, it can seem that some IT projects never end. This, in turn, raises more risks, with obsolescence an increasing concern. Add to this the difficulties of estimating in IT projects and the cascading negative effects of mistakes made upfront in designing software architecture, and you have the clear potential for risk overload. By focusing on providing immediate value, Agile helps minimize risk in software development because the process allows stakeholders to spot problems quickly. Time is fixed (in preordained sprints), so money and scope can be adjusted. This prevents schedule overruns. If the product owner wants more software, she can decide this bit-by-bit, at the end of each sprint. In a blog post about the risk-minimizing benefits of Agile, Robert Sfeir writes,

Agile exposes and provides the opportunity to recognize and mitigate risk early. Risk mitigation is achieved through cross-functional teams, sustainable and predictable delivery pace, continuous feedback, and good engineering practices. Transparency at all levels of an enterprise is also key. Agile tries to answer questions to determine risk in the following areas, which I will discuss in more detail in a future post:
• Business: Do we know the scope? Is there a market? Can we deliver it for an appropriate cost?
• Technical: Is it a new technology? Does our Architecture support it? Does our team have the skills?
• Feedback (verification & validation): Can we get effective feedback to highlight when we should address the risks?
• Organizational: Do we have what we need (people, equipment, other) to deliver the product?
• Dependency: What outside events need to take place to deliver the project? Do I have a plan to manage dependencies? (2015)

Keep in mind that in all industries, simply identifying threats is only the first step in risk management. Lots of time and money could be lost by failing to understand probabilities and consequences, causing your team to place undue management focus on threats that have a low probability of occurrence, or that may have minimal impact.

Monetizing Risk

One way to manage risk is to monetize it.
This makes sense because risk usually manifests itself as additional costs. If a project takes longer than expected or requires additional resources, costs go up. Thus, to fully understand the impact of the risks facing a particular project, you may need to assign a dollar value to (that is, monetize) the potential impact of each risk. Monetizing risks gives outcomes "real economic value when the effects might otherwise be ignored" (Viscusi 2005). Once you've monetized a project's risks, you can rank them and make decisions about which deserves your most urgent attention. Every industry has its own calculations for monetizing risks. For example, this article includes a formula for monetizing risks to networks and data: http://www.csoonline.com/article/2903740/metrics-budgets/a-guide-to-monetizing-risks-for-security-spending-decisions.html. Keep in mind that monetizing certain risks is controversial. In some instances, it is acceptable or even required. One example is the value of a statistical life, which is "an estimate of the amount of money the public is willing to spend to reduce risk enough to save one life." The law requires U.S. government agencies to use this concept in "a cost-benefit analysis for every regulation expected to cost \$100 million or more in a year" (Craven McGinty 2016). This article describes the varying ways society directly or indirectly values human lives: http://www.nytimes.com/2007/09/09/weekinreview/09marsh.html. In other circumstances, most notably in product safety, it is clearly unethical to make a decision based strictly on monetary values. An example of this is the famous decision by Ford Motor Company in the early 1970s to forgo a design change that would have required retooling the assembly line to reduce the risk of death and injury from rear impacts to the Ford Pinto car. The company's managers made this decision based on a cost-benefit analysis that determined it would be cheaper to go ahead and produce the faulty car as originally designed, and then make payments as necessary when the company would, inevitably, be sued by the families of people killed and injured in the cars. Public outrage over this decision and the 900 deaths and injuries caused by the Pinto's faulty fuel tank clearly demonstrated the need for product safety design decisions to be based on broader considerations than a simple tradeoff analysis comparing the cost of an improved design with an assigned value for the lives saved.
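Where the consequences are purely financial, such as schedule slippage, rework, or commodity price increases, monetizing and ranking risks is straightforward and uncontroversial. The minimal Python sketch below shows one way a team might do it; the risk descriptions, probabilities, dollar impacts, and mitigation costs are all invented for illustration and are not taken from this chapter.

```python
# A minimal sketch of monetizing and ranking project risks.
# Every figure below is an invented example, not data from this chapter.

risks = [
    # (description, probability, cost if it occurs, cost of mitigating it)
    ("Copper price increase",           0.60, 120_000, 15_000),
    ("Key engineer leaves mid-project", 0.25, 200_000, 40_000),
    ("Permit approval delayed",         0.40,  60_000, 30_000),
]

def expected_cost(probability, impact):
    """Monetized risk: probability of the threat times its dollar consequences."""
    return probability * impact

# Rank the monetized risks so the most urgent ones surface first.
ranked = sorted(risks, key=lambda r: expected_cost(r[1], r[2]), reverse=True)

for description, probability, impact, mitigation in ranked:
    exposure = expected_cost(probability, impact)
    verdict = ("mitigation pays for itself" if mitigation < exposure
               else "consider tolerating the risk")
    print(f"{description}: expected cost ${exposure:,.0f}, "
          f"mitigation ${mitigation:,.0f} -> {verdict}")
```

A ranking like this also makes it easy to compare the cost of reducing a risk with the cost of simply living with it, which is one of the main practical benefits of putting risks in dollar terms.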
8.4 Reporting on Risks

Every well-run organization has its own process for reporting on threats as a project unfolds. In some projects, a weekly status report with lists of threats color-coded by significance is useful. In others, a daily update might be necessary. In complicated projects, a project dashboard, as described in Lesson 11, is essential for making vital data visible for all concerned. The type of contract binding stakeholders can affect everyone's willingness to share their concerns about risk. In capital projects, the traditional design/bid/build arrangement tends to create an adversarial relationship among stakeholders. As David Thomack, Chief Operating Officer at Suffolk Construction, explained in a lecture on Lean and project delivery, this type of arrangement forces stakeholders to take on adversarial roles to protect themselves from blame if something goes wrong (2018). This limits the possibilities for sharing information, making it hard to take a proactive approach to project threats, which would in turn minimize risk throughout the project. Instead, stakeholders are forced to react to threats as they arise, an approach that results in higher costs and delayed schedules. By contrast, a more Lean-oriented contractual arrangement like Integrated Project Delivery, which emphasizes collaboration among all participants from the very earliest stages of the project, encourages participants to help solve each other's problems, taking a proactive approach to risk (2018). In this environment, it's in everyone's best interests to openly acknowledge risks and look for ways to mitigate them before they can affect the project's outcome. Whatever process your organization and contract arrangement requires, keep in mind that informing stakeholders or upper management about a threat is meaningless if you do it in a way they can't understand, or if you don't clarify exactly how urgent you perceive the risk to be. When deciding what to include in a report, think about what you expect your audience to be able to do with the information you provide. And remember to follow up, making sure your warning has been attended to. As a project manager, you should focus on 1) clearly identifying risks, taking care not to confuse project issues with risks, and 2) clearly reporting the status of those risks to all stakeholders. If you are reporting on risks that can affect the health and safety of others, you have an extra duty to make sure those risks are understood and mitigated when necessary. Here are two helpful articles with advice on managing and reporting project risks:

8.5 The Big Picture is Bigger than You Think

If, as a risk manager, you spend all your time on a small circle of potential risks, you will fail to identify threats that could, in the long run, present much greater risks. And then there's the challenge of calculating the cost of a perfect storm of multiple, critical risks materializing simultaneously. In such a situation, calculating risks is rarely as simple as summing up the total cost of all the risks. In other words, you need to keep the big picture in mind. And when it comes to risk, the big picture is nearly always bigger than you think it is. Nassim Nicholas Taleb has written extensively about the challenges of living and working in a world where we don't know—indeed can't ever know—all the facts. In his book The Black Swan: The Impact of the Highly Improbable, he introduces his theory of the most extreme form of externality, which he calls a black swan event. According to Taleb, a black swan event has the following characteristics: First, it is an outlier, as it lies outside the realm of regular expectations, because nothing in the past can convincingly point to its possibility. Second, it carries an extreme impact. Third, in spite of its outlier status, human nature makes us concoct explanations for its occurrence after the fact, making it explainable and predictable (2010, xxii). He argues that it is nearly impossible to predict the general trend of history, because history-altering black swan events are impossible to predict: A small number of Black Swans explain almost everything in our world, from the success of ideas and religions, to the dynamics of historical events, to elements of our own personal lives.
Ever since we left the Pleistocene, some ten millennia ago, the effect of these Black Swans has been increasing. It started accelerating during the industrial revolution, as the world started getting more complicated, while ordinary events, the ones we study and discuss and try to predict from reading the newspapers, have become increasingly inconsequential. Just imagine how little your understanding of the world on the eve of the events of 1914 would have helped you guess what was to happen next. (Don’t cheat by using the explanations drilled into your cranium by your dull high school teacher). How about the rise of Hitler and the subsequent war?… How about the spread of the Internet?… Fads, epidemics, fashion, ideas, the emergence of art genres and schools. All follow these Black Swan dynamics. Literally, just about everything of significance around you might qualify. (2010, xxii) One of Taleb’s most compelling points is that these supposedly rare events are becoming less rare every day, as our world grows more complicated and interconnected. While you, by definition, can’t expect to foresee black swan events that might affect your projects, you should at least strive to remain aware that the most unlikely event in the world could in fact happen on your watch. As a thought experiment designed to expand your appreciation for the role of randomness and luck in modern life, Taleb suggests that you examine your own experience: Count the significant events, the technological changes, and the inventions that have taken place in our environment since you were born and compare them to what was expected before their advent. How many of them came on a schedule? Look into your own personal life, to your choice of profession, say, or meeting your mate … your sudden enrichment or impoverishment. How often did these things occur according to plan? (2010, xiii) The goal of this line of thinking is not to induce paralysis over the many ways your plans can go awry, but to encourage you to keep your mind open to all the possibilities, both positive and negative, in any situation. In other words, you need to accept the uncertainty inherent in living order. That, in turn, will make you a better risk manager because you will be less inclined to believe in the certainty of your plans. Keep in mind that engineers, especially early-career engineers, tend to be uncomfortable with uncertainty and ambiguity. They’re trained to seek clarity in all things, which is a good thing. But they also need to accept ambiguity and uncertainty as part of living order. Strive to develop the ability to assess, decide, observe, and adjust constantly throughout the life of a project and your career. 8.6 Contingency Planning and Probabilistic Risk Modeling Contingency planning is the development of alternative plans that can be deployed if certain threats are realized (e.g., parts from a supplier do not meet quality requirements). Not all types of risk involve unexpected costs. Some are more a matter of having a Plan B in your back pocket to help you deal with a risk that becomes a reality. For example, if you are working with a virtual team scattered across the globe, one risk is that team members will not be able to communicate reliably during weekly status meetings. In that case, the project manager would be wise to have a contingency plan that specifies an alternative mode of communication if, for instance, a Skype connection is not functioning as expected. 
However, for most risks, contingency planning comes down to setting aside money—a contingency fund—to cover unexpected costs. As discussed in Lesson 6, on small projects of limited complexity, a contingency fund consisting of a fixed percentage of the overall budget will cover most risks. But if you are working on expensive, complex projects, you will probably be required to use models generated by specialized risk analysis software. Such tools can help you determine what risks you need to or can afford to plan for. They do a lot of the number crunching for you, but still require some expert knowledge and the intelligence to enter appropriate inputs. You need to avoid the trap of accepting the outputs uncritically just because you assume the inputs were valid in the first place. To analyze risks related to costs, organizations often turn to Monte Carlo simulations, a type of probabilistic modeling that aggregates "a series of distributions of elements into a distribution of the whole" (Merrow 2011, 324). That is, the simulation aggregates a range of high and low values for various costs. For example, when generating a Monte Carlo simulation, a project team might look at the cost of labor, assigning "an amount above and below the estimated value. The distribution might incorporate risk around both changes (increases) in hourly cost and productivity" (Merrow 2011, 324). Keep in mind that Monte Carlo simulations, like other types of probabilistic risk modeling, are only useful if their underlying assumptions are accurate. To learn more about Monte Carlo simulations, see this helpful explanation: http://news.mit.edu/2010/exp-monte-carlo-0517. No matter what approach you take, the most valuable part of any contingency planning is the thinking that goes into the calculation, rather than any particular number generated by the calculation. Thinking carefully about the risks facing your project, and discussing them with others, is the best way to identify the areas of uncertainty in the project plan. This is why simple percentages or even Monte Carlo calculations may be counter-productive—they might encourage you to defer to a set rule or a program to do the thinking for you. According to Larry Roth, the fundamental risk calculation—the probability of a threat materializing times the consequences if it does—should guide your thinking about contingency planning. If both the probability and consequences of a particular threat are small, then it's probably not worth developing a full-blown contingency plan to deal with it. But if the probability or consequences are high, then a formal contingency plan is a good idea (pers. comm., April 23, 2018).
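To make the Monte Carlo idea mentioned above a little more concrete, here is a deliberately simplified sketch in Python (standard library only). The cost elements, their low/most-likely/high ranges, the use of triangular distributions, and the choice of the 80th percentile are illustrative assumptions, not figures or recommendations from Merrow or from this chapter.

```python
# A heavily simplified Monte Carlo cost simulation (illustrative assumptions only).
import random

random.seed(42)  # repeatable results for the example

# Each cost element: (low, most likely, high) estimates in dollars -- invented figures.
cost_elements = {
    "labor":     (400_000, 500_000, 700_000),
    "materials": (250_000, 300_000, 450_000),
    "equipment": (100_000, 120_000, 200_000),
}

def simulate_total_cost():
    """One iteration: draw each element from a triangular distribution and sum."""
    return sum(random.triangular(low, high, mode)
               for low, mode, high in cost_elements.values())

trials = sorted(simulate_total_cost() for _ in range(10_000))

base_estimate = sum(mode for _, mode, _ in cost_elements.values())
p80 = trials[int(0.8 * len(trials))]   # cost the simulation stays under 80% of the time

print(f"Base estimate:           ${base_estimate:,.0f}")
print(f"80th-percentile outcome: ${p80:,.0f}")
print(f"Implied contingency:     ${p80 - base_estimate:,.0f} "
      f"({(p80 - base_estimate) / base_estimate:.0%} of the base estimate)")
```

Commercial risk analysis packages do the same kind of aggregation with far richer inputs, but the principle is identical: simulate a distribution of total cost rather than trusting a single-point estimate, then size the contingency fund from that distribution. As noted above, the result is only as good as the ranges the team feeds in.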
8.7 Ethics and Risk Management

Engineering and ethics have been in the news a great deal in recent years, in stories about the BP oil spill, the Volkswagen emissions-control software scandal, and the General Motors ignition switch recall. These stories remind us that decisions about risk inevitably raise ethical questions because the person making the decision is often not the one who will actually suffer the consequences of failure. At the same time, unethical behavior is itself a risk, opening an organization to lawsuits, loss of insurance coverage, poor employee morale (which can lead to more unethical behavior), and diminished market share, just to name a few potentially crippling problems. An article on the website for the International Risk Management Institute explains the link between risk management and ethics as essentially a matter of respect:

Ethics gives guidelines for appropriate actions between persons and groups in given situations—actions that are appropriate because they show respect for others' rights and privileges, actions that safeguard others from embarrassment or other harm, or actions that empower others with freedom to act independently. Risk management is based on respect for others' rights and freedoms: rights to be safe from preventable danger or harm, freedoms to act as they choose without undue restrictions. Both ethics and risk management foster respect for others, be they neighbors, employees, customers, fellow users of a good or service, or simply fellow occupants of our planet—all sharing the same rights to be safe, independent, and hopefully happy and productive. Respect for others, whoever they may be, inseparably links risk management and ethics. (Head 2005)

Why do people behave unethically? That's a complicated, interesting question—so interesting, in fact, that it has been the motivation for a great deal of human art over many centuries, from Old Testament stories of errant kings to Shakespeare's histories to modern TV classics like The Sopranos. In the following sections, we'll explore some factors that affect the ethical decision-making of the average person. But first, it's helpful to remember that not everyone is average. Some people are, at heart, deeply unethical. Scientists estimate that 4% of the human population are psychopaths (also called sociopaths)—meaning they have no empathy and no conscience, have no concept of personal responsibility, and excel at hiding the fact that their "psychological makeup is radically different" from most people (Stout 2005, 1). If you find yourself dealing with someone who constantly confounds your sense of right and wrong, consider the possibility that you are dealing with a psychopath. The books The Sociopath Next Door, by Martha Stout, and Snakes in Suits: When Psychopaths Go to Work, by Paul Babiak and Robert D. Hare, offer ideas on how to deal with people who specialize in bad behavior. Sometimes, the upper managers of an organization behave, collectively, as if they have no empathy or conscience. They set a tone at the top of the organizational pyramid that makes their underlings think bad behavior is acceptable, or at least that it will not be punished. For example, the CEO of Volkswagen said he didn't know his company was cheating on diesel engine emissions tests. Likewise, the CEO of Wells Fargo said he didn't know his employees were creating fake accounts in order to meet pressing quotas. One can argue whether or not they should have known, but it's clear that, at the very least, they created a culture that not only allowed cheating, but rewarded it. Sometimes the answer is to decentralize power, in hopes of developing a more open, more ethical decision-making system. But as Volkswagen is currently discovering as they attempt to decentralize their command-and-control structure, organizations have a way of resisting this kind of change (Cremer 2017). Still, change begins with the individual. The best way to cultivate ethical behavior is to take some time regularly to think about the nature of ethical behavior and the factors that can thwart it. Let's start with the question of personal values.
Context and Ethical Decision Making: Values Since ancient times, philosophers have wrestled with questions about ethics and morality. How should we behave? If people know the right thing to do, can we count on them to do it? Does behaving ethically automatically lead to happiness? Do the consequences of an action determine if it was ethical, or can an action be ethical or unethical in its own right? Culture and Ethics As you learned in Lesson 5, people from different cultures can have completely different interpretations of the same phenomenon. This is definitely true about ethical judgements. What might be considered bribery in one culture could be considered a perfectly acceptable attempt at building a relationship in another. To learn how to navigate such situations, see Riding the Waves of Culture: Understanding Diversity in Global Business a book by Fons Trompenaars and Charles Hampden-Turner. It includes excellent case studies and quizzes on cultural perceptions. One of the most important philosophical questions is this: How can we tell if an action is ethical? We all like to think we have a reliable inner compass that points us in the direction of good behavior—or that at least tells us when something we are determined to do is bad. But research has shown that the average person’s sense of right and wrong is often determined more by context than principle and reason. For example, one study looked at the factors that might cause people to report on their colleagues’ behavior, as Edward Snowden did when he released top secret documents regarding National Security Agency surveillance practices. Is whistle-blowing an act of heroism or betrayal? According to researchers Adam Waytz, James Dungan, and Liane Young, your answer to that question depends on whether you value loyalty more than fairness, or fairness more than loyalty. And which you value more depends at least in part on context. In the study, people who were asked to write about the importance of fairness were then more likely to act as whistle-blowers when faced with evidence of unethical behavior. Meanwhile, people who were asked to write about the importance of loyalty were less likely to act as whistle-blowers, presumably because they had been primed by the writing exercise to value loyalty to the group above all else. In a New York Times article about their study, the researchers conclude, This does not mean that a five-minute writing task will cause government contractors to leak confidential information. But our studies suggest that if, for instance, you want to encourage whistle-blowing, you might emphasize fairness in mission statements, codes of ethics, honor codes or even ad campaigns. And to sway those who prize loyalty at all costs, you could reframe whistle-blowing, as many have done in discussing Mr. Snowden’s case, as an act of “larger loyalty” to the greater good. In this way, our moral values need not conflict. (Waytz, Dungan and Young 2013) Other examples of our perceptions of right and wrong that will continue to challenge us for the foreseeable future include: • Liberal versus conservative • Big government versus big business • Immigration policies • Culture, faith and religion • Security and safety • Fair pay • Work ethic • Perceptions about power and authority At some point, you might find that your personal values conflict with the goals of your organization or project. In that case, you should discuss the situation with colleagues who have experience with similar situations. 
You should also consult the Code of Ethics for Engineers, published by the National Society of Professional Engineers, which is available here: https://www.nspe.org/sites/default/files/resources/pdfs/Ethics/CodeofEthics/Code-2007-July.pdf. Context and Ethical Decision-Making: Organizational Structure The structure of an organization can also affect ethical decision-making, often in profound ways. In Team of Teams: New Rules of Engagement for a Complex World, General Stanley McChrystal and his coauthors describe the case of the faulty ignition switch General Motors used in the Chevy Cobalt and Pontiac G5. The failure of these switches killed thirteen people, most of them teenagers whose parents had bought the cars because they thought the Cobalt and G5 were safe though inexpensive vehicles. “What the public found most shocking, however, was not the existence of the ignition switch issue or even the age of its victims, but the time it had taken GM to address the problem” (2015, 188). Ten years elapsed between the first customer complaint and GM’s first attempt to solve the deadly problem. In the public imagination, GM emerged as negligent at best. However, according to McChrystal et al., the reality “was more complex. What seemed like a cold calculation to privilege profits over young lives was also an example of institutional ignorance that had as much to do with management as it did with values. It was a perfect and tragic case study of the consequences of information silos and internal mistrust” (2015, 188). The company was “riddled with a lack of contextual awareness and trust” that prevented individuals from recognizing and acting on the failed ignition switch. The problem, for which there was an easy and inexpensive solution, floated from committee to committee at GM, because the segregated information silos prevented people from grasping its true nature. “It would take a decade of demonstrated road failures and tragedies before the organization connected the dots” (2015, 192). McChrystal and his coauthors conclude that “GM’s byzantine organizational structure meant that nobody—venal or kindly—had the information” required to make the calculations that would have revealed the flaws in the ignition switch (2015, 193). Managing Risk through Ethical Behavior As the GM example and countless others demonstrate, the consequences of unethical behavior can be catastrophic to customers and the general public. It can also pose an enormous risk to the company itself, with overwhelming financial implications. In the long run, being ethical is simply good business practice and the responsibility of professional engineers. Because context can more powerfully motivate personal behavior than reason and principle, every organization should • Develop, actively promote, and reinforce a set of clearly stated organizational values • Encourage leaders to model ethical behavior and explicitly tie decisions and behaviors to the organization’s values • Create a general climate in which discussions about ethics and related decisions are the norm • Create systems that motivate ethical behavior Ultimately, ethical behavior is not just a matter of individual choices, but an ongoing process of discussion and engagement with questions of right and wrong. ~Practical Tips • Work as a team: When you identify risks, a team approach is very helpful. Get multiple sets of eyes looking at the same project, but perhaps from different perspectives. 
• Use a project dashboard to keep all important metrics where everyone can see them: By practicing good visual management, you'll make it easy to see if a project is on or off schedule. You'll learn more about dashboards in Lesson 11.
• Remember that you probably know less than you think you do: When analyzing risk, keep in mind that people usually underestimate their uncertainty and overestimate the precision of their own knowledge and judgment. For example, on capital projects, we tend to be overly optimistic in terms of cost and schedule, and we tend to underestimate many other factors that might have a significant impact. Consider asking experts with no direct interest in the project to help with identifying risks that may not be obvious to those more closely involved.
• Stay informed: To improve your ability to manage risk, stay informed about world events, politics, scientific and technological developments, market conditions, and finance, which can in turn affect the availability of capital required to complete your project. You never know where the next risk to your project will come from. Your goal is not to foresee every possibility, but to stay attuned to the ever-changing currents of modern life, which may in turn affect your work in unexpected ways.
• Don't be inordinately risk averse: You have to assess the risks facing a project realistically, and confront them head-on, so that you can make fully informed decisions about how to proceed. You might decide to take some risks when the potential reward justifies it, and when the worst-case outcome is survivable. Further, some risks may create new opportunities to expand the services your organization offers to affected markets.
• Keep in mind that externalities can change everything: In our interconnected global economy, externalities loom ever larger. The disruptions Toyota factories around the world experienced as a result of the 2011 earthquake and tsunami are just one example of how a company's faith in its supposedly impregnable supply chain can prove to be "an illusion," as a Toyota executive told Supply Chain Digest (2012).
• Make sure everyone's speaking the same language: To manage risk effectively, you need to make sure all stakeholders have the same understanding of what constitutes a high risk, a medium risk, and a low risk. This is especially important when considering inputs from several different project managers across a portfolio of projects. Each project manager might have a different tolerance for risk, and so assign varying risk values for the same real risk. It's helpful to have a defined set of risk definitions that specify your organization's thresholds for high, medium, and low risks, taking into account the probability and level of consequences for each.
• Quantify risk: Quantifying risk is not always possible, but it does enforce some objectivity. According to Larry Roth, "a key reason for quantifying risk is to be able to understand the impact of risk mitigation. If you are faced with a threat, you should be looking at ways to reduce that threat. For example, you might be able to reduce its probability of occurring. In the case of flooding, for instance, you could build taller levees. Or, you could reduce the consequences of flooding by moving people out of the flood path. A real benefit of risk analysis is the ability to compare the cost of reducing risk to the cost of living with the risk" (pers. comm., November 30, 2018).
For example, you could quantify risk in terms of percent behind schedule or over budget, as follows: • High is +/- 20% budget • Medium is +/- 10% budget • Low is +/- 5% budget • Be on the lookout for ways to mitigate risk: Mitigating a risk means you “limit the impact of a risk, so that if it does occur, the problem it creates is smaller and easier to fix” (DBP Management 2014). For example, if one risk facing a new water treatment project on public land is that people in the neighborhood will object, you could mitigate that risk by holding multiple events to educate people on the sustainability benefits the new facility will provide to the community. • Listen to your intuition: While quantifying risk is very helpful, you shouldn’t make any risk assessment decisions by focusing purely on numbers. The latest research on decision-making suggests the best decisions are a mix of head and intuition, or gut feelings. In an interview with the Harvard Business Review, Gerd Gigerenzer explains, “Gut feelings are tools for an uncertain world. They’re not caprice. They are not a sixth sense or God’s voice. They are based on lots of experience, an unconscious form of intelligence”(Fox 2014). Sometimes informed intuition—an understanding of a situation drawn from education and experience—can tell you more than all the data in the world. • In a complex situation, or when you don’t have access to all the data you’d like, don’t discount the value of a tried and true rule of thumb: Gerd Gigerenzer has documented the importance of simple rules, or heuristics, drawn from the experience of many people, in making decisions in a chaotic world. For example, in investing, the rule of thumb that says “divide your money equally among several different types of investments” usually produces better results than the most complicated investment calculations (Fox 2014). • Also, don’t discount the value of performing even a very crude risk analysis: According to Larry Roth, “the answers to a crude risk analysis may be helpful, in particular if you make a good faith attempt to understand the uncertainties in your analysis, and you vary the input in a sensitivity analysis. A crude analysis should not replace judgment but can help improve your judgment” (pers. comm., November 30, 2018). • Be mindful of the relationship between scope and risk management: Managing risk is closely tied to managing project scope. To ensure that scope, and therefore risk, remains manageable, you need to define scope clearly and constrain it to those elements that can be directly controlled or influenced by the project team. • Understand the consequences: In order to complete a project successfully, stakeholders need to understand the broader implications of the project. The better stakeholders understand the project context, the more likely they are to make decisions that will ensure that the entire life cycle of the project (from the using phase on through retirement and reuse) proceeds as planned, long after the execution phase has concluded. • Assign responsibilities correctly in a RACI chart: A responsibility assignment matrix (RACI) chart must specify one and only one ‘R’ for each task/activity/risk. A common error is a task having no defined owner, or two or more owners. This results in confused delivery at best, or more likely, no delivery ownership. • Beware of cognitive biases: As explained in Lesson 2, cognitive biases such as groupthink and confirmation bias can prevent you from assessing a situation accurately. 
These mental shortcuts can make it hard to perceive risks, and can cause you to make choices that you may only perceive as unethical in hindsight. • Talk to your supervisor if you think your organization is doing something unethical or illegal: It’s sometimes helpful to raise an issue as a question—e.g., “Is what we are doing here consistent with our values and policies?” For additional guidance, consult the Code of Ethics for Engineers, published by the National Society of Professional Engineers, which is available here: https://www.nspe.org/sites/default/files/resources/pdfs/Ethics/CodeofEthics/Code-2007-July.pdf. • Look for a new job: If your efforts to reform poor risk management or unethical practices in your organization fail, consider looking for a different job. A résumé that includes a stint at a company widely known for cheating or causing harm through poor risk management can be a liability throughout your career. ~Summary • Risk can be a good thing, signaling new opportunity and innovation. To manage risk, you need to identify the risks you face, taking care to distinguish risks from issues. Risks are caused by external factors (such as the price of commodities) that the project team cannot control, whereas issues are known concerns (such as the accuracy of an estimate) that the project team will definitely have to address. Modern organizations face many types of risks, including risks associated with human capital, marketing, compliance, sustainability, and project complexity. A team member’s perception of risk will vary, depending on his or her role and current circumstances. • Successful project managers manage the differing perceptions of risk, and the widespread confusion about its very nature, by engaging in systematic risk management. However, risk management tools can encourage overconfidence in the accuracy of estimates. In traditional risk management, stakeholders take on as little risk as possible, passing it off to other stakeholders whenever they can. A living order approach to risk seeks a more equitable form of risk-sharing, one that recognizes that some risks emerge over the life of the project. • Different industries face different risks. For example, product development risks are often related to schedules, whereas IT risks are typically related to the complexity of IT projects. One way to manage risks in many industries is to monetize them—that is, assign dollar values to them. Once you’ve monetized a project’s risks, you can rank them and make decisions about which deserves your most urgent attention. • The biggest dangers are the risks you fail to perceive because you are too narrowly focused on technical issues, and the risks you can’t foresee because they involve an extreme event that lies outside normal experience. • The most valuable part of any contingency planning is the thinking that goes into it. Thinking carefully about the risks facing your project, and discussing them with others, is the best way to identify the areas of uncertainty in the project plan. • Decisions about risk inevitably raise ethical questions because the person making the decision is often not the one who will actually suffer the consequences of failure, and because unethical behavior is itself a risk. ~Glossary • black swan event—Term used by Nassim Nicholas Taleb in his book The Black Swan: The Impact of the Highly Improbable to refer to the most extreme form of externality.
According to Taleb, a black swan event has the following characteristics: it is an outlier, unlike anything that has happened in the past; it has an extreme impact; and, after it occurs, people are inclined to generate a rationale for it that makes it seem predictable after all (2010, xxii). • contingency planning—The development of alternative plans that can be deployed if certain risks are realized (e.g., parts from a supplier do not meet quality requirements). • ethics— According to Merriam-Webster, a “set of moral principles: a theory or system of moral values.” • Integrated Project Delivery—A Lean-oriented contractual arrangement that emphasizes collaboration among all participants from the very earliest stages of the project, and that encourages participants to help solve each other’s problems, taking a proactive approach to risk (Thomack 2018). • issue—A known concern, something a team will definitely have to address. Compare to a risk, which is caused by external factors that the project team cannot fully identify. • monetize risk—To assign a dollar value to the potential impact of risks facing a project. Monetizing risks gives outcomes “real economic value when the effects might otherwise be ignored” (Viscusi 2005). Once you’ve monetized a project’s risks, you can rank them and make decisions about which deserves your most urgent attention. You can also evaluate the cost-effectiveness of steps required to reduce risk. Every industry has its own calculations for monetizing risks, although it is unethical in some industries, especially where public safety is concerned. • Monte Carlo simulation—”A mathematical technique that generates random variables for modelling risk or uncertainty of a certain system. The random variables or inputs are modeled on the basis of probability distributions such as normal, log normal, etc. Different iterations or simulations are run for generating paths and the outcome is arrived at by using suitable numerical computations” (The Economic Times n.d.). • proactive concurrency—Intentionally developing an awareness of options that can be employed in case you run into problems with your original plan. • risk—The probability that something bad will happen times the consequences if it does. The likelihood of a risk being realized is typically represented as a probability value from 0 to 1, with 0 indicating that the risk does not exist, and 1 indicating that the risk is absolutely certain to occur. • risk management—“The process of identifying, quantifying, and managing the risks that an organization faces” (Financial Times). • risk matrix—A risk management tool in which the probability of the risk is multiplied by the severity of consequences if the risk does indeed materialize. • tolerable risk—The risk you are willing to live with in order to enjoy certain benefits. • threat—A potential hazard that could affect a project. A threat is not, in itself, a risk. A risk is the probability that the threat will be realized, multiplied times the consequences. • value of a statistical life—An “estimate of the amount of money the public is willing to spend to reduce risk enough to save one life” (Craven McGinty 2016).
As obvious as it seems, customer value is defined by no one but the customer. — Mark Rosenthal (Rosenthal 2009) Objectives After reading this chapter, you will be able to • Define basic terms such as budget, estimate, price, cost, and value • Discuss the relationship between scope changes and cost and budget overruns • Explain basic concepts related to budgeting • Identify different types of costs, and discuss issues related to contingency funds, profit, and cost estimating • Explain the benefits of target-value design The Big Ideas in this Lesson • The project manager’s biggest job is delivering value as defined by the customer. A more geometric order focus on the project’s budget is also important, but never as important as delivering value as defined by the customer. • Managing value and cost requires constant engagement with the customer, and a mutual understanding of basic terminology like budget and estimate. • When creating an estimate, don’t confuse precision with accuracy. 9.1 Talking the Talk Nearly all projects require money to pay for the required resources—labor, services, and supplies. Project success requires that project managers accurately identify the money needed for a project, acquire the commitment of those funds through a budgeting process, and then successfully manage the expenditure of those funds to achieve the desired outcomes. Your ability to manage stakeholder expectations and commitment related to project funds, combined with your ability to effectively manage the use of those funds to deliver results, will form the basis of your reputation as a reliable project manager. An important step in ensuring that a project unfolds smoothly is to make sure everyone is using similar terminology. Terminology is important in any technical endeavor, but when it comes to a project’s overall value, miscommunications resulting from incorrectly used terms can result in misaligned expectations and erode trust among project participants. Unfortunately, this type of miscommunication is extremely common. So, let’s start with some basic terms: • budget: The funds that have been allocated for a project. • estimate: An assessment of the likely budget for a project. An estimate involves counting and costing and is based on ranges and probabilities. Throughout a project, managers and team members are asked to estimate remaining work, cost at completion, and required remaining time. An estimate is a forward projection, using what is known, to identify, as best as possible, the required effort, time, and/or cost for part or all of a project. • price: “A value that will purchase a finite quantity, weight, or other measure of a good or service” (Business Dictionary). The price of a purchased unit is determined by the market. • cost: “An expenditure, usually of money, for the purchase of goods or services” (Law 2016). Practically speaking, project cost (and its relationship to revenue or approved expenditures) is the thing management cares about most. Note that, like all terms, the meaning of “cost” varies somewhat from industry to industry. For example, in product development, the term has three specific meanings: 1) cost to create the product or project; 2) cost to establish a manufacturing cell capable of producing the product; and 3) cost of the final good or service to the market. • value: “The inherent worth of a product as judged by the customer and reflected in its selling price and market demand” (Lean Enterprise Institute 2014). 
Project managers have to think about two kinds of value—target value, or the value the stakeholders hope to achieve, and delivered value, the value actually generated by the project. You’ll learn more about target value later in this lesson. The following scenario illustrates the use of these related concepts. Suppose you set \$100 as your monthly gas budget at the beginning of the year. However, because the current price of gas is \$5.50 a gallon, which is higher than normal, you estimate that you will actually spend \$130 on gas this month. You won’t know your cost for gas until you make your final purchase of the month and add up all the money you spent on fuel. If you wind up having to take an unexpected out-of-town trip, then your cost could be quite a bit higher than your estimate. Or, if the price of gas drops suddenly, say to \$1.60 per gallon, your cost will turn out to be lower. In any case, the cost is a simple summation you can do looking backwards. But the value of your month of travel is something only you can define. If your unexpected out-of-town trip results in some compelling new business opportunities, then you would likely value the trip very highly. But if the weather prevents you from reaching your destination, and then you get lost on the way home, you would probably assign a low value, as experienced, to your misbegotten adventure. Much like in a project, the delivered value may fall short of the target value. A Word on Price Note that the precise meaning of the term “price” varies from one industry to the next. In a capital project, the term “price” may refer to the total price the customer will pay. In a product development project, the term typically refers to the market price for the good or service, and will often fluctuate based on the volume purchased by a particular customer. In the real world, these terms do not always mean the same thing to everyone. Worse, people sometimes use them interchangeably or use them to mean different things in different situations. In particular, the terms budget and estimate are often incorrectly used as synonyms, as are the terms cost and price. The end result of this confusion can be a lack of clarity among project partners regarding project goals and constraints. It helps to keep in mind that a budget, an estimate, a target value, and a price are tools to help guide the project team, whereas cost and delivered value are project outcomes that help determine project success. Budgeting and estimating are tools we use to try to gauge cost and create value. But they don’t cause cost. Project cost is driven by scope, required resources to accomplish the scope, and related prices. Delivering value is your primary job as a project manager. But of all the terms related to budgeting a project, the meaning of “value” can be the most difficult to grasp. The important thing to remember is that value is determined by the customer, and then flows back to all participants, shaping all project activities. As a project manager, you need to engage with the customer to identify the project’s real value. At the same time, you might also need to take a longer view of value. For example, your organization might be better able to offer value to future customers by carefully studying all projects at completion to capture lessons learned—information that can then be used to improve future projects. The value of this investment of resources may not be apparent to customers, who are only focused on their particular projects, as it mainly benefits future projects.
But the overall process benefits all customers in the long run. As you work on the project’s budget, and perhaps face difficult decisions, you need to focus on tasks that create value. According to the Lean Lexicon, a good test for identifying a value-creating task “is to ask if this task could be left out without affecting the product. For example, rework and queue time are unlikely to be judged of any value by customers, while actual design and fabrication steps are” (Lean Enterprise Institute 2014). This article walks through an example of a home construction project that could have turned out much better for the home owners—who were forced to live in a trailer with no running water during the project—if the builder had focused on providing “chunks” of immediately usable value, such as a working bathroom (Lloyd 2015): http://project-management.com/understanding-lean-project-management/. As you’ve learned in earlier lessons, that’s exactly what Agile project management does in the world of IT. At the end of each sprint, the customer is in possession of a piece of working software. Throughout any project, you need to do all you can to get stakeholders to focus on the success of the whole project, and not just their individual parts. One way to do this is to make sure everyone understands what the project value is, and then encourage them to optimize the flow of the project. As the project evolves, the project team should continue to refine its understanding of project value; refine its estimate of required resources; and, if necessary, modify the approved budget or adjust scope so that costs do not exceed the budget. In product development, it is helpful to think of value as an attribute or feature for which customers will pay a premium. Customers may pay more for smaller size, longer life, better aesthetics, or more durable products. Depending on the use of the product being created, these may be more or less important. Susan Ottmann, program director for Engineering Professional Development at the University of Wisconsin-Madison, points out that “Schneider Electric produces two types of load centers for the U.S. market. (A load center is the box in your home that houses circuit breakers.) The QO brand is differentiated from the Homeline brand by a higher level of durability and quality. Although both perform the same function, the technology inside the breakers is slightly different. Presumably, QO has made the calculation that some customers will be willing to pay more for a higher quality product” (pers. comm., June 6, 2018). 9.2 Keeping an Eye on Scope A project’s budget, estimate, and cost are all affected by the project’s scope. Anyone who has ever remodeled a bathroom in an old house is familiar with the way scope can change throughout the course of a project. The fact is, you can’t really know how much the project is going to cost until you tear up the floor and get a look at the state of the old plumbing. Boston’s Big Dig—which was estimated to cost \$2.8 billion, but ultimately cost \$14.6 billion—is a more extreme example of the same principle at work: It is difficult to precisely predict the cost of any endeavor at its outset. A good rule of thumb is to assume that whatever can go wrong probably will go wrong. For example, to return to the remodeling example—rather than naively hoping for the best, you’d be wise to assume that everything old will have to be replaced when you begin pulling up the floor in a fifty-year-old bathroom. 
Overly optimistic assumptions about risk and scope are a leading cause of unrealistic estimates. Assuming everything will have to be replaced would help set an estimate for the upper bound of a likely range of costs for the project. Estimates should include a range, which can be narrowed as more is learned about actual project conditions. Examples of cost and time overruns are easy to find. Here are just a few sources to give you a sense of the magnitude of the problem, which is especially acute in massive megaprojects: When asked to defend mounting costs, project managers will sometimes argue that the cost increased because the scope evolved, when in fact the real culprit is scope creep. As discussed in Lesson 4, scope evolution, or managed change, is a natural and rational result of the kind of learning that goes on throughout the course of a project. It is a conscious, managed choice caused by externalities that forces you to reconsider project essentials in order to achieve the intended project value. Scope creep, by contrast, is caused by unmanaged changes to the project scope. It might add value from the customer’s perspective, but the time, money, and resources consumed by the change of scope lead to additional overruns. Scope creep tends to happen when no one is paying attention to the project’s scope. The key to managing scope changes is a process for early identification, review, and approval of requested changes to project scope. A Scope Change Request—or a Project Variation Request (PVR) as it is sometimes called— is a form that must be signed by all affected stakeholders prior to implementation. This article by Tom Mochal provides some helpful guidelines for managing scope changes: http://www.techrepublic.com/article/follow-this-simple-scope-change-management-process/. You can download a sample Scope Change Request form here: www.demo.projectize.com/pmf/templates/63.doc 9.3 Understanding Budgets Precision versus Accuracy Can a price be precise but not accurate? Yes. You might calculate a price down to the penny, but if you’re wrong, you’re not accurate. Engineers tend to focus on precision at the expense of accuracy. But accuracy is far more useful. And remember, you can never be more precise than the least precise line item (Nelson 2017). Budgeting is an exercise in refining your focus. You start with a wide-angle estimate, in which the details are necessarily fuzzy, and bit by bit zero in on a sharper picture of project costs. You might be temperamentally inclined to try to nail down every figure in an early draft of a budget, but in fact you should only develop a budget at the precision needed for current decisions. Your overall precision can and should advance as the project advances. This is especially important in the earliest stages of the budgeting process, when you are working out rough estimates. Take care to estimate at the appropriate level of precision: Don’t make the mistake of thinking you can estimate costs to the exact penny or dollar. \$378,333.27 is not a realistic or intelligent estimate. Ultimately, overly precise budgets represent a communication failure. By proposing a budget to the customer that contains overly precise figures, you risk giving a false sense of accuracy regarding your understanding of and knowledge about the project. In the early stages of the budgeting process, when you are still working out estimates, it’s helpful to include an uncertainty percentage. A typical approach is to include a +/- percentage, such as \$400,000 +/- 10%. 
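As a minimal illustration (the function and figures below are hypothetical, not part of any standard), an uncertainty percentage can be turned into an explicit range:

```python
# Minimal sketch: express a point estimate +/- an uncertainty percentage
# as an explicit low/high range. Figures are hypothetical.

def estimate_range(point_estimate: float, uncertainty_pct: float) -> tuple[float, float]:
    """Return (low, high) bounds for an estimate with the given +/- percentage."""
    spread = point_estimate * uncertainty_pct / 100
    return point_estimate - spread, point_estimate + spread

low, high = estimate_range(400_000, 10)  # $400,000 +/- 10%
print(f"Estimate: ${low:,.0f} to ${high:,.0f}")  # Estimate: $360,000 to $440,000
```

Reporting the range, rather than a single overly precise figure, communicates honestly how much is still unknown.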
The percentage may initially be large but should gradually decrease as the project progresses and the level of uncertainty declines. For IT projects, which are notoriously difficult to estimate, consider going a step further and adding an uncertainty percentage to every line item. Some items, such as hardware, might be easy to estimate. But other items, such as labor to create new technology, can be extremely difficult to estimate. These line item variances can influence the total estimate variance by a significant amount in many projects. But even when you have a final budget in hand, you need to prepare for uncertainty by including an official contingency fund, which is a percentage of the budget set aside for unforeseen costs. Contingency funds are described in more detail later in this lesson. Successful project managers use the budgeting process as a way to create stakeholder buy-in regarding the use of available resources to achieve the intended outcome. By being as transparent as possible about costs and resource availability, you’ll help build trust among stakeholders. By taking care to use the right kinds of contracts—for example, contracts that don’t penalize stakeholders for escalating prices caused by a changing economy—you can create incentives that keep all stakeholders focused on delivering the project value, rather than merely trying to protect their own interests. The relationship between costs and contracts is discussed in more detail later in this lesson. This blog post by Tim Clark includes some helpful tips on creating a project budget: https://www.liquidplanner.com/blog/7-ways-create-budget-project/. 9.4 Understanding Cost Ultimately cost, the number management typically cares about most in a for-profit organization, is determined by price. For many projects, it’s impossible to know the exact cost of an endeavor until it is completed. Stakeholders can agree on an intended value of a project at the beginning, and that value has an expected cost associated with it. But you may not be able to pin down the cost more precisely until you’ve done some work on the project and learned more about it. To estimate and manage costs effectively, you need to understand the different types of costs: • direct costs: “An expense that can be traced directly to (or identified with) a specific cost center or cost object such as a department, process, or product” (Business Dictionary n.d.). Examples of direct costs include labor, materials, and equipment. A direct cost changes proportionately as more work is accomplished. • direct project overhead costs: Costs that are directly tied to specific resources in the organization that are being used in the project. Examples include the cost of lighting, heating, and cleaning the space where the project team works. Overhead does not vary with project work, so it is often considered a fixed cost. • general and administrative (G&A) overhead costs: The “indirect costs of running a business,” such as IT support, accounting, and marketing” (Investing Answers n.d.). The type of contract governing your project can affect your consideration of costs. As explained in Lesson 4, the two main types of contracts are fixed-price and cost-plus. Fixed price is the more predictable of the two with respect to final cost, which can make such contracts appealing to the issuing party. But “this predictability may come with a price. 
The seller may realize the risk that he is taking by fixing a price and so will charge more than he would for a fluid price, or a price that he could negotiate with the seller on a regular basis to account for the greater risk the seller is taking” (Symes 2018). Many contracts include both fixed-price and cost-plus features. For example, they might have a fixed price element for those parts of the contract that have low variability and are under the direct control of the project team (e.g., direct labor) but have variable cost elements for those aspects that have a high degree of uncertainty or are outside the direct control of the project team (e.g., fuel costs or market driven consumables). Contingency Funds If money is not available from other sources, then cost overruns typically result in a change in the project’s scope or a reduction in overall quality. To prevent this, organizations build contingency funds into their budgets. Technically, a contingency fund is a financial reserve that is allocated for identified risks that are accepted and for which contingent or mitigating responses are developed. The exact amount of a contingency fund will vary, depending on project risks; a typical contingency fund is 10% to 15% of the total budget but depends on the risks associated with the project. From the Trenches: John Nelson on Cost Planning and Living Order John Nelson summarizes his thoughts on cost planning, based on his decades of work on capital projects, as follows: Conceptual planning takes place in living order. Cost management, when done right, starts out in living order, but moves into a very strict geometric order. Unfortunately, it is rarely done right. Between 2/3 and 3/4 of all projects worldwide end up costing more than originally planned. Getting the costs wrong during the planning stage can result in huge consequences for a project, and possibly for your career. Major cost busts can follow you around for the rest of your working life. If you cost something incorrectly, you’ll have to make corresponding downgrades in scope and quality. For example, many college campuses have new buildings with two or three empty floors because the money ran out before they could be finished. You really don’t want to be the project manager responsible for a costing error of that magnitude, which is sometimes referred to as a CLM, or career limiting move. Even worse, companies that get costs wrong and underbid a project sometimes try to salvage a profit by illegal means—perhaps by using cheap materials or cutting corners on safety. On public projects, such as highways or schools, huge costing errors can result in loss of public trust, making it more difficult for the public agency to do more work in the future. In that case, a cost bust can be an OLM—an organizational limiting move. Accurately and precisely predicting the cost of a project is very difficult. You need to start with humility and curiosity, expending a great deal of effort to get the numbers right, especially when it comes to parts of the project you don’t understand. This is true for small projects, like a bathroom renovation in an old house, where you simply don’t know what you’re going to find until you start opening up the walls. It’s also proven true for huge undertakings like the Big Dig, Boston’s tunnel megaproject, which ended up with a cost overrun of 190%. (2017) Contingency funds are often available to pay for an agreed-upon scope change. 
However, some project managers make a practice of treating a contingency fund as a “Get Out of Jail Free” card that they can use to escape any cost limitations. Some, as a practical matter, will artificially inflate a contingency fund to ensure that they have plenty of resources to draw on to manage any unforeseen future risks. But that is never a good idea, because if you wind up with a large contingency fund that you ultimately don’t spend, you have essentially held that money hostage from the rest of the enterprise (a lost opportunity cost). That can be as damaging to your organization’s mission as a cost overrun that prevents you from finishing a project. This excellent article, published by the Australian firm Broadleaf Capital International, discusses the issues and tradeoffs involved in contingency funds: http://broadleaf.com.au/resource-material/project-cost-contingency/. As explained in Lesson 8, contingency funds are a form of risk management. They are a necessary tool for dealing with uncertainty. Unfortunately, as necessary as they are, it’s not always possible to build them into your approved budget. For example, if you are competitively bidding on a contract that will be awarded on the lowest cost, then including a contingency fund in your estimate will almost certainly guarantee that your company won’t win the contract. It is simply not practical to include a contingency fund in a lump sum contract. In the living order approach to this problem, the owner maintains a shared contingency fund instead and makes it available, upon justification, to all project stakeholders. This approach helps ensure that project participants will work collaboratively with the project sponsor to solve any problems they might notice, confident that there is money available to address problems that threaten project value or to leverage opportunities that will provide greater project value. For example, in a lecture on Lean and integrated project delivery, David Thomack, a long-time veteran of the construction industry, explained how the Boldt Company and other stakeholders involved in a \$2 billion healthcare project protected millions of dollars in contingency funding, which was then ultimately shared among all stakeholders (Thomack 2018). Such shared contingency funds are typically spelled out in the project contract and are an effective tool to manage risk and uncertainty. Although some organizations only manage out-of-pocket project costs, best practice is to manage total cost, including costs associated with staff (engineering, purchasing, testing, etc.) working on the project. Profit In private enterprise, cost management is directed toward maximizing profit. A private or publicly traded organization cannot stay in business unless it is profitable. But that doesn’t mean that every project is primarily a profit-maximizing undertaking. Certainly, individual projects (such as developing a new product or completing a design for a client) may have a goal of generating profit. However, some projects (such as deploying an enterprise software system or meeting a regulatory compliance requirement) may not in themselves generate profit, but rather support the broader organization in generating profits. Within governmental and non-profit organizations, projects are not designed to generate profits but might be launched to reduce costs or generate net revenues that can be used to cover other costs within the organization.
As a project manager, you need to understand the financial expectations for your projects. Make sure you know how the financial performance of your project affects the financial performance of the associated project portfolio and the overall organization. This understanding will help you advocate for your proposed project. It will also enable you to better justify changes to the project’s scope and budget, based on the project’s proposed value. As a general rule, chasing profits at the expense of both your organization’s larger mission and the value your organization wants to offer to customers is not sustainable. A relentless focus on profit alone can wreak havoc on a project as project managers are forced to reduce quality or slow the schedule to meet a carved-in-stone budget that will supposedly ensure profitability. In such situations, however, profitability is nearly always defined in the short-term. A fixation on short-term profits can, paradoxically, lead to spiraling losses in the long term—perhaps because unsatisfied customers take their business elsewhere. Likewise, chasing excessive quality or accelerated schedules can be equally elusive. Ideally, some kind of financial metric is associated with the success of any project and is spelled out in the contract. A collaborative approach to contracts and procurement helps keep all stakeholders focused on the project’s intended value rather than simply on short-term profits. Cost Estimating Estimating costs accurately is essential to any organization’s success. In fact, in many industries, the knowledge involved in cost estimating is actually a valuable form of intellectual property. The ability to estimate costs is part of a company’s overall competitive advantage and a skill required in most industries. There are two basic types of estimating: • top-down estimates: Estimates that “usually are derived from someone who uses experience and/or information to determine the project duration and total cost. However, these estimates are sometimes made by top managers who have very little knowledge of the component activities used to complete the project” (Larson and Gray 2011, 134). A top-down estimator generates an expected total for the entire project and then divides up that total among the various project tasks. • bottom-up estimate: “A detailed cost estimate for a project, computed by estimating the cost of every activity in a work breakdown structure, summing these estimates, and adding appropriate overheads” (Business Dictionary n.d.). A bottom-up estimator divides the project into elements and tasks, estimates a cost for each, and then sums all estimated costs to create a total for the project. More on Estimating: For clarification on the difference between top-down and bottom-up estimating, see this blog post by Andy Makar: https://www.liquidplanner.com/blog/how-long-is-that-going-to-take-top-down-vs-bottom-up-strategies/. For a complete discussion of cost estimating, see Chapter 5 of Project Management: The Managerial Process, by Erik W. Larson and Clifford F. Gray. A common problem with simple bottom-up estimates is that they often overestimate costs that cannot be justified by market conditions. Total projected costs need to be compared with market realities, and task estimates of planned work and associated costs may have to be adjusted to reach a feasible budget for the overall project.
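The sketch below offers a hedged illustration of the bottom-up approach just described (the task names and cost figures are hypothetical; the 10% contingency simply echoes the typical range discussed earlier in this lesson). It sums task-level estimates from a work breakdown structure and then adds overhead and a contingency allowance.

```python
# Minimal sketch of a bottom-up estimate: sum the estimated cost of every
# work-breakdown-structure task, then add overhead and contingency.
# All tasks and figures are hypothetical.

task_estimates = {            # WBS task -> estimated direct cost
    "site preparation": 42_000,
    "foundation": 118_000,
    "framing": 96_000,
    "electrical": 54_000,
}

direct_total = sum(task_estimates.values())
overhead = 0.12 * direct_total       # assumed overhead rate
contingency = 0.10 * direct_total    # e.g., 10% set aside for identified risks

estimate = direct_total + overhead + contingency
print(f"Direct costs:       ${direct_total:,.0f}")
print(f"Overhead:           ${overhead:,.0f}")
print(f"Contingency:        ${contingency:,.0f}")
print(f"Bottom-up estimate: ${estimate:,.0f}")
```

As noted above, a total produced this way still has to be checked against market realities before it becomes the approved budget.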
Note that pressure to make such adjustments can encourage the sponsor to try to make the numbers work any way possible, perhaps by overstating the benefits of the project (e.g., higher sales volume than the market forecast predicts) or planning for the project team to do more work faster than is realistic. Ultimately, this is an ethical issue and could end up costing you your reputation. It’s essential that you remain truthful about the realities of your projects as you estimate their costs. A third type, iterative estimating, combines the best of top-down and bottom-up estimating. Iterative estimating is a process of refining an estimate by taking into account information typically used in a top-down estimate (such as past history of similar projects) and detailed information generated by bottom-up estimating. Iterative estimating takes place in living order and relies on negotiation and coordination among the project stakeholders. It only works if past work is representative of future work, which you can really only determine if you are producing small batches. One type of iterative estimating, phase estimating, is “used when the project is large or lengthy or is developing something new or untried for the organization. In phased estimates, the near-term work is estimated with a high level of accuracy, ±5 – 15%, whereas future work is estimated at a high level with ±35% accuracy” (Goodrich n.d.). As the project advances through major phases, the budget for subsequent phases is intentionally reviewed and refined in light of knowledge gained to date. According to David Pagenkopf, IT project managers use yet another type of estimating called parametric estimating, which is a way to “use experience from parts of other projects to come up with estimates for work packages that are similar to past work, but not the same.” For example, he explains that “if a ½ ton Ford pick-up gets 20 mpg on the highway then I can estimate that a ½ ton GMC pick-up may get 20 mpg on the highway. That information may be helpful in determining the entire cost of a trip that involves the use of multiple rented trucks. Actual mileage will vary, but without testing the GMC truck and collecting data, I can reasonably estimate mpg for it” (pers. comm. June 1, 2018). 9.5 Target-Value Design The Chaos Report: An interesting source of data on the general health of projects is the annual Chaos Report from the Standish Group. Although aimed at the IT industry, it can be extrapolated to other types of projects to show general performance on projects in other industries. The most recent version of the report requires paid access, but you can find earlier versions online for free, such as this copy of the 2014 report: www.projectsmart.co.uk/white-papers/chaos-report.pdf. Despite all the effort organizations put into cost management, cost overruns remain a fact of life. For example, in a study of 1,471 IT projects, Flyvbjerg and Budzier found: The average overrun was 27%—but that figure masks a far more alarming one. Graphing the projects’ budget overruns reveals a “fat tail”—a large number of gigantic overages. Fully one in six of the projects we studied…[had] a cost overrun of 200%, on average, and a schedule overrun of almost 70%. This highlights the true pitfall of IT change initiatives: It’s not that they’re particularly prone to high cost overruns on average, as management consultants and academic studies have previously suggested. It’s that an unusually large proportion of them incur massive overages.
(2011) Cost overruns occur for many reasons, including lack of sufficient knowledge about the project, inability to obtain funding for the full scope of the desired work, uncertainty about the feasibility of the project, and conflicting priorities. Using only the traditional, geometric approach to cost management fails to encourage the broad, ongoing stakeholder engagement and collaboration that can prevent these problems. You will get far better results by incorporating the living order principle of target-value design, a cornerstone of Lean project delivery in the construction field, which has applications in nearly all areas of project management. A target value is the output stakeholders want the project to generate. Target-value design focuses on creating the best possible target value for the customer without exceeding the project’s target costs. It requires a fundamental shift in thinking from “expected costs” to “target cost.” The target-value design process is collaborative and iterative, typically requiring frequent refinement and conversation among project stakeholders. For a quick, thirty-second introduction to target-value design, see the video at pb.libretexts.org/web1/?p=88. In the traditional budget process, the estimate is based on a detailed design and project plan. In target-value design, you start with the intended value of the project, and then design and plan a project that will deliver the intended value for the targeted cost. In other words, the project’s value and associated budget determine the project’s design, and not the other way around. This is nothing new for product development teams, who nearly always have to design their products for particular price points. But the degree of engagement with the customers required in target-value design to find out what customers really want is not something most product development teams are used to. In any industry, the goal of target-value design is hitting the sweet spot of what you can get for the right price, schedule, and quality. For example, the whole point of Agile software development is continually refocusing the project in an attempt to achieve the desired target value. Thinking About Value According to John Nelson, you can’t get the costs right on a project until you understand what the customer values. To do that, you need to understand what value really means: We make value-based decisions all the time. But different people value different things. For instance, in Wisconsin, you might choose to drive an inexpensive car, so you don’t have to worry about it getting damaged on icy winter roads. A realtor, who has to drive clients around in her car, might choose a more comfortable, expensive vehicle. There’s no right or wrong in these decisions. You’re both right. Keep in mind that a moral value is different from a project value. Moral values are about right and wrong. Project value is concerned with the worth of something—and the only person who can determine that is the customer. The only time an engineer can object to the customer’s definition of project value is when the customer asks for something that is a threat to human safety or is illegal. When costing a project, you need to figure out what your customer’s value threshold is. You don’t want to build the best thing ever built if that’s not what they want. So, the first step in the target value process is to get the customer to explain her definition of value.
To do that, you need to have open conversations in which you keep asking questions, all the while making it clear you are eager to learn what the customer wants. At this stage, it’s essential to resist the temptation to over-promise what you can deliver. One way to avoid this is to continually engage with the customer about value, budget, and schedule. (2017) According to John Nelson, in capital projects, a target value cost model is “a framework of estimates and budgets that changes over time” (2017). The process entails “many conversations about price, cost, budget, and estimate, and at the same time discussions with the customer about what they really value.” When done right, it transforms “cost management from a calculation performed in isolation by professional estimators, to a process of ongoing, collaborative learning about the project in which team members and the customers all have a role. It avoids the pitfall of having one person responsible for calculating a total cost, and another person responsible for delivering the project at that number” (2017). The ultimate goal of target-value design is to reduce the waste and rework that normally arises in the design/estimate/redesign cycle. It necessarily involves cross-functional teams because no one party in isolation has the necessary knowledge to define project value and develop a project plan that most efficiently delivers that value. Target-value design integrates the definition of the project’s product/deliverables with the process used to deliver the project and with the associated project costs. To help you implement target-value design in your organization, the Lean Construction Institute recommends nine “foundational practices.” These principles apply to all types of projects: 1. Engage deeply with the client to establish the target-value. Both designers and clients share the responsibility for revealing and refining concerns, for making new assessments of what is value, and for selecting how that value is produced. Continue engaging with the client throughout the design process to uncover client concerns. 2. Lead the design effort for learning and innovation. Expect the team will learn and produce something surprising. Establish routines to reveal what is learned and innovated in real-time. Also expect surprise will upset the current plan and require more replanning. 3. Design to a detailed estimate. Use a mechanism for evaluating design against the budget and the target values of the client. Review how well you are achieving the targets in the midst of design. When budget matters, stick to the budget. 4. Collaboratively plan and re-plan the project. Use planning to refine practices of coordinating action. This will avoid delay, rework, and out-of-sequence design. 5. Concurrently design the product and the process in design sets. Develop details in small batches (lot size of one) in tandem with the customers (engineer, builders, owner, users, architect) of the design detail. Adopt a practice of accepting (approving) completed work as you design. 6. Design and detail in the sequence of the customer who will use it. This maintains attention to what is valued by the customer. Rather than doing what you can do at this time, do what others need you to do next. This leads to a reduction in negative iterations. 7. Work in small and diverse groups. Learning and innovation arises socially. 
The group dynamics of small groups—8 people or less—is more conducive to learning and innovating: trust and care for one another establish faster; and communication and coordination are easier. 8. Work in a big room. Co-locating design team members is usually the best option. Design is messy. Impromptu sessions among design team members are a necessary part of the process. So are regular short co-design sessions among various specialists working in pairs. 9. Conduct retrospectives throughout the process. Make a habit of finishing each design cycle with a conversation for reflection and learning. Err on the side of having more retrospectives not fewer. Use plus|deltas at the end of meetings. Use more formal retrospectives that include the client at the end of integration events. Instruct all team members to ask for a retrospective at any time even if they just have a hunch that it might uncover an opportunity for improvement. (Macomber and Barberio 2007) Costs in Practice John Nelson’s work on the Discovery Building at the University of Wisconsin-Madison included an interesting example of the kind of value trade-off that occurs in target value design: It turned out that the owner expected extremely high-quality lighting in the building, which put pressure on the target value for electrical. To that, we said, “Ok, we can increase the target value for electrical, but we can’t increase the project budget. So what system will we offset that with?” In the end, we used a flat slab concrete structure system that allowed us to take four feet out of the height of the building. We also used digital-integrated design, designing the entire building in AutoCAD, working out all the interferences before we went in the field. Taking four feet out of the height of the building allowed the skin price to come down, which offset the cost of the higher quality lighting. This is an example of the kind of give-and-take required to manage costs. Value is at the heart of target-value design and is ultimately defined by the client. However, as a project manager, it’s sometimes your job to expand the client’s notion of what constitutes a project’s overall value, perhaps by encouraging the client to think about the project’s entire life cycle, from planning, to construction/manufacturing/implementation, to operation and support, and to product or facility retirement/ decommissioning. So, what does target-value design look like in practice? Appendix A in The Design Manager’s Handbook (which is available online here: http://onlinelibrary.wiley.com/doi/10.1002/9781118486184.app1/pdf) includes some helpful examples (Mossman, Ballard and Pasquire 2013). Figure 9-1, created by The Boldt Company, provides a graphical representation of key milestones in a target-value design project. Figure 9-1: Key milestones in a target-value design project (Source: The Boldt Company) This diagram shows that: • The target cost was set at \$13,100,000 by board approval. • The initial estimated costs, exclusive of contingencies, were slightly above the target cost. • The design was modified to enable estimated costs, including contingencies, to approach the target cost. • The final design included owner-initiated changes that were covered by contingency allowances. As the project advanced, contingency funds were used as needed, although they were reduced as the team managed actual costs to align with estimates, with the goal of keeping total costs within the target budget. 
• Unused contingency funds were available at the end to share among project partners. • Throughout the project, participants took care to check in on the project’s scope, cost, and schedule. In any project, it’s essential to have some process for defining the project scope, identifying potential scope changes, and identifying the cost and schedule tradeoffs necessary to make those changes possible. ~Practical Tips • When it comes to project costs, don’t try to be all things for all customers: Some organizations do better on low-cost projects, others on high-cost projects. Few can do both. If discussions about value during the planning stage tell you that the customer has a Mercedes appetite with a Chevrolet wallet, then you probably don’t want to work for that customer because you won’t be able to please them. • Be prepared to learn: Throughout a project, you move along on a continuum of living order to geometric order, where things get more predictable as you proceed. But you never know the total cost of a project until it’s finished. • Engage stakeholders throughout the budgeting process: It’s essential to keep the conversation going with stakeholders as you make trade-offs, so the stakeholders own all the value decisions • Don’t shrug off a costing mistake by saying I could only estimate what I knew: To the customer, that means I didn’t know enough. Or worse, I didn’t take the time to learn enough. Be honest and humble about what you do and do not know about costs at any given point, avoid giving the impression that you know more than you do, and never be more precise than is justified. • Avoid the jargon guaranteed maximum price with qualifications: This phrase, which is very common in the construction industry, is an oxymoron. Something can’t be both guaranteed and qualified. • Cultivate informed intuition: Developed through experience, informed intuition can be a huge help in estimating. Your informed intuition will improve as you repeat similar projects. In the early stages of your career, seek out mentors who can help speed up your acquisition of informed intuition. • Don’t make the mistake of waiting to look at costs, budgets, and estimates until you reach a milestone: At that point it’s usually too late. To avoid surprises, check in with the numbers throughout the project. Strive to get the big things right at the beginning, using informed intuition for unknowns. Throughout the project, be prepared to adjust, reset, or stop proactively if a budget bust or estimate overrun seems likely. • Remember that production/construction costs are not the end of the story: You also need to be upfront about the difference between the production/construction costs, and the total cost of ownership. For example, the total cost of ownership for an engine would include maintenance and replacement parts. In capital projects, this includes fees, furniture, and contingency funds, which can add 30% to 40%. In IT, the life-cycle cost includes maintenance, which is typically 20% of the purchase price. • Understand the difference between costs in public and private domains: Sometimes, in the public domain, in a very rigid design-bid-build situation, you are given a number for the amount of money you are able to spend. In that case, you simply have to design the project to meet that number. That’s not target valuing. That’s reverse engineering to meet a specific cost at a minimum level of quality. 
• Be realistic about your level of uncertainty: At all times, avoid misleading stakeholders into thinking your current level of accuracy is higher than it actually is. Be honest about the fact that the project team’s ability to be accurate will improve over time, as the team gains more information about the project. • Learn about the financial environment in which your project will unfold: Make sure you understand the financial planning methods of your business. In some companies, costs of test facilities are considered overhead and are part of general and administrative fixed costs. In other companies, the same internal costs are charged directly to each individual project on a “per use” basis. This can drastically affect final project cost viability. Understanding how your company allocates costs and what needs to be included in the project budget is essential for good planning. A best practice way to do this is to have your project plans and budgets audited by a project manager experienced in the company processes. • Manage contingency funds at a project level: In the same way that a gas expands to fill the available space, spending expands to match the available budget. For this reason, contingency is best managed at a project level, not a task level. • Create a shared contingency fund: Whenever possible, create a shared contingency fund for the project, so that all stakeholders benefit from staying on budget or are hurt by cost overruns. • Remember that, in product development, a lower-than-expected volume can affect profitability: In product development, the cost of a product at launch is often higher than expected due to lower volumes. This may impact profitability. Make sure your team understands the path to reaching the target cost at the target volume with contingencies if anticipated volumes are not attained. This is especially true in industries with high fixed costs and low manufacturing costs, such as the pharmaceutical industry. • Think about possible tradeoffs: Scope, costs, and schedule will typically change as a project advances. As project circumstances evolve, keep asking yourself, “What trade-offs can my team make that are in the project’s best interests?” • Be prepared to work with a predefined budget: A budget negotiation process in which the team is free to discuss the project and make suggestions is ideal, but sometimes an organization’s leader creates a budget for a project, and the assigned team is charged with making it work one way or the other. In that case, you will need to assess the feasibility of achieving the project’s goals with the assigned budget and either: 1) lead the team in developing an appropriate project strategy and plan; or 2) negotiate with the project sponsor to modify the scope and/or budget to enable your team to confidently commit to delivering the project’s value. ~Summary • Terminology is important in any technical endeavor, but when it comes to a project’s overall value, miscommunications resulting from incorrectly used terms can result in misaligned expectations and erode trust among project participants. Make sure you understand the difference between budgets and estimates, and the difference between price and cost. Of all the terms related to budgeting a project, the meaning of “value” can be especially difficult to grasp. The most important thing to remember is that value is determined by the customer and then flows back to all participants, shaping all project activities. Delivering value is your primary job as a project manager. 
• A project’s budget, estimate, and cost are all affected by the project’s scope. When asked to defend mounting costs, project managers will sometimes argue that the cost increased because the scope evolved, when in fact the real culprit is scope creep. The key to managing scope changes is a Scope Change Request—or a Project Variation Request (PVR) as it is sometimes called—which is a form that must be signed by all affected stakeholders prior to implementation. • Budgeting is an exercise in refining your focus. You start with a wide-angle estimate, in which the details are necessarily fuzzy, and bit by bit zero in on a sharper picture of project costs. Take care to estimate at the appropriate level of precision: Don’t make the mistake of thinking you can estimate costs to the exact penny or dollar. Successful project managers use the budgeting process as a way to create stakeholder buy-in regarding the use of available resources to achieve the intended outcome. By being as transparent as possible about costs and resource availability, you’ll help build trust among stakeholders. • To estimate and manage costs effectively, you need to understand the different types of costs, including direct costs, direct project overhead costs, and general and administrative (G&A) overhead costs. The type of contract (for example, fixed-price versus cost-plus) governing your project can affect your consideration of costs. If money is not available from other sources, then cost overruns typically result in a change in the project’s scope or a reduction in overall quality. To prevent this, organizations build contingency funds into their budgets. The exact amount of a contingency fund will vary, depending on project risks; a typical contingency fund is 10% to 15% of the total budget but depends on the risks associated with the project. Shared contingency funds can encourage stakeholders to focus on the well-being of the project as a whole rather than their individual stakes in the project. • As a project manager, you need to understand the financial expectations for your projects. In private enterprise, cost management is directed toward maximizing profit. But that doesn’t mean that every project is primarily a profit-maximizing undertaking. Within governmental and non-profit organizations, projects are not designed to generate profits but might be launched to reduce costs or generate net revenues that can be used to cover other costs within the organization. A collaborative approach to contracts and procurement helps keep all stakeholders focused on the project’s intended value, rather than simply on short-term profits. Estimating costs accurately is also essential to any organization’s success, and you should be familiar with the two basic types of estimates—top-down estimates and bottom-up estimates—as well as iterative estimates, which combine the best features of top-down and bottom-up estimates. • Target-value design, a cornerstone of Lean project delivery in the construction field, has applications in nearly all areas of project management. Target-value design focuses on creating the best possible value for the customer without exceeding the project’s target costs. It requires a fundamental shift in thinking from “expected costs” to “target cost.” The target-value design process is collaborative and iterative, typically requiring frequent refinement and conversation among project stakeholders. 
The ultimate goal of target-value design is to reduce the waste and rework that normally arises in the design/estimate/redesign cycle. ~Glossary • bottom-up estimate—“Detailed cost estimate for a project, computed by estimating the cost of every activity in a work breakdown structure, summing these estimates, and adding appropriate overheads” (Business Dictionary n.d.). A bottom-up estimator starts by dividing the project up into tasks, then estimates a cost for each task, and sums the total costs for all the project tasks (see the sketch following this glossary). • budget—The funds that have been allocated for a project. • contingency fund—A financial reserve that is allocated for identified risks that are accepted and for which contingent or mitigating responses are developed. Contingency funds are also often available to pay for an agreed-upon scope change. • cost—“An expenditure, usually of money, for the purchase of goods or services” (Law 2016). Note that, like all terms, the meaning of “cost” varies somewhat from industry to industry. For example, in product development, the term has three specific meanings: 1) cost to create the product or project; 2) cost to establish a manufacturing cell capable of producing the product; and 3) cost of the final good or service to the market. • direct costs—“An expense that can be traced directly to (or identified with) a specific cost center or cost object such as a department, process, or product” (Business Dictionary n.d.). Examples of direct costs include labor, materials, and equipment. A direct cost changes proportionately as more work is accomplished. • direct project overhead costs—Costs that are directly tied to specific resources in the organization that are being used in the project. Examples include the cost of lighting, heating, and cleaning the space where the project team works. Overhead does not vary with project work, so it is often considered a fixed cost. • estimate—An assessment of the likely budget for a project. An estimate involves counting and costing and is based on ranges and probabilities. Throughout a project, managers and team members are asked to estimate remaining work, cost at completion, and required remaining time. An estimate is a forward projection, using what is known, to identify, as best as possible, the required effort, time, and/or cost for part or all of a project. • general and administrative (G&A) overhead costs—The “indirect costs of running a business, such as IT support, accounting, and marketing” (Investing Answers n.d.). • iterative estimating—A combination of top-down and bottom-up estimating, which involves constant refinement of the original estimate by taking into account information typically used in a top-down estimate (such as past history of similar projects) and increasingly detailed information generated by bottom-up estimating. • parametric estimating—A way to use experience from parts of other projects to come up with estimates for work packages that are similar to past work but not the same. • phase estimating—A type of iterative estimating that is “used when the project is large or lengthy or is developing something new or untried for the organization. In phased estimates, the near-term work is estimated with a high level of accuracy ±5 – 15% whereas future work is estimated at a high level with ±35% accuracy” (Goodrich n.d.). As the project advances through major phases, the budget for subsequent phases is intentionally reviewed and refined in light of knowledge gained to date.
• price—“A value that will purchase a finite quantity, weight, or other measure of a good or service” (Business Dictionary n.d.). • Project Variation Request (PVR)—See Scope Change Request. • Scope Change Request—A document that describes a proposed scope change, including its potential benefits and the consequences of not implementing the change. A Scope Change Request must be signed by all affected stakeholders prior to implementing a scope change. Also known as a Project Variation Request (PVR). • scope creep—Changes to a project’s scope without any corresponding changes to the schedule or cost. The term is typically applied to changes that were never approved, or that were approved without sufficient knowledge of the project or adequate assessment of risks and costs. Simply put, scope creep is unmanaged change. • scope evolution—An alteration to the project scope that occurs as the project participants learn more about the project. Scope evolution results in an official change in the project scope, and therefore to the project budget or schedule, as agreed to by all project participants. In other words, scope evolution is managed change. • target value—The output stakeholders want the project to generate. • target-value design—A design process that focuses on value as defined by the customer, with the project’s overall design involving stakeholder engagement and collaboration. • top-down estimates—Estimates that “usually are derived from someone who uses experience and/or information to determine the project duration and total cost. However, these estimates are sometimes made by top managers who have very little knowledge of the component activities used to complete the project” (Larson and Gray, 134). A top-down estimator generates a total for the entire project and then divides up that total among the various project tasks. • value—“The inherent worth of a product as judged by the customer and reflected in its selling price and market demand” (Lean Enterprise Institute 2014).
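To make the estimating terms above concrete, here is a minimal Python sketch of a bottom-up estimate with a contingency reserve. The tasks, rates, and overhead percentages are hypothetical; only the 10% to 15% contingency range comes from this lesson.

```python
# Minimal sketch of a bottom-up estimate (hypothetical tasks and rates).
# Each task's direct cost is estimated, overheads are added, and a
# contingency fund (within the 10-15% range noted in the summary) is reserved.

tasks = {                      # task -> (person_hours, hourly_rate, materials)
    "Design": (120, 95.0, 0),
    "Fabrication": (300, 70.0, 18_000),
    "Testing": (80, 85.0, 2_500),
}

DIRECT_OVERHEAD_RATE = 0.12    # direct project overhead, as a share of direct cost
GA_OVERHEAD_RATE = 0.08        # general and administrative (G&A) overhead
CONTINGENCY_RATE = 0.10        # within the typical 10-15% range

direct = sum(hours * rate + materials for hours, rate, materials in tasks.values())
overheads = direct * (DIRECT_OVERHEAD_RATE + GA_OVERHEAD_RATE)
subtotal = direct + overheads
contingency = subtotal * CONTINGENCY_RATE

print(f"Direct costs:      ${direct:,.0f}")
print(f"Overheads:         ${overheads:,.0f}")
print(f"Contingency (10%): ${contingency:,.0f}")
print(f"Budget request:    ${subtotal + contingency:,.0f}")
```

A top-down estimator would instead start from a single figure for the whole project (for example, scaled from a similar past project) and apportion it across the tasks; iterative and phase estimating refine whichever starting point you use as the project advances.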
Limitless material resources are not only unavailable most of the time, they may actually be a hindrance. And remaining lean and mean can often be a blessing. —Michael Gibbert, Martin Hoegl, and Liisa Välikangas (In Praise of Resource Constraints 2007) Objectives After reading this chapter, you will be able to • Discuss basic concepts related to resource management, including over-commitment and over-allocation • Explain some geometric-order resource optimization techniques, including resource leveling, resource smoothing, and the critical chain approach • Describe challenges related to resource allocation in Agile • Discuss ideas related to sustainability and the triple bottom line • Identify issues affecting resource management at the portfolio level • List some advantages of resource constraints The Big Ideas in this Lesson • You have to have the right resources at the right time. This involves staying flexible and making changes as necessary, rather than hewing to a predefined structure that might not be useful as conditions change. • Constraints on resources are inevitable, but by combining geometric-order techniques such as resource leveling and resource smoothing with an adaptive, living order approach, you can prevent project crises. • Project managers have to look outside their projects to find the resources they need. Somehow, within their organizations, they have to secure those resources. Meanwhile, at the portfolio level, executives have to manage resources to ensure that they are available for many projects over the long term. 10.1 Managing Resources in Living Order The most detailed schedules and budgets in the world are useless if you don’t have the people, equipment, facilities, and other resources you need, when you need them. In reality, the schedule is only determined after the resources have been assigned. In other words, until you have assigned and committed resources, your project schedule and budget are not fully realized. They are based on assumptions, which are a huge source of uncertainty. This is especially true in the IT world, where productivity can vary so much from one person to another. You can’t really have a clear idea of how fast your team can work until you know who’s on the team. Acquiring project resources usually involves looking outside the boundaries of the project itself to find what you need. In the early stages, that includes finding the right people for your project team. Inevitably, you will face restrictions on the resources available to you. And yet, to complete a project successfully, you have to figure out how to get the resources you need—people, office space, Internet bandwidth, computers, copper wire, shingles, 3-D modeling equipment, concrete, and so on—when you need them. That’s why understanding the principles of resource allocation is so essential to successful project management. Most definitions of “resource allocation” describe it as something that takes place on the organization level, as in the following: “Resource allocation is the process of assigning and managing assets in a manner that supports an organization’s strategic goals” (Rouse n.d.). On the project level, resource allocation still involves making choices that support the organization’s strategic goals, but you also have to factor in your project’s more specific goals.
In all cases, resource allocation (or resource management as it is sometimes called) includes “managing tangible assets such as hardware to make the best use of softer assets such as human capital. Resource allocation involves balancing competing needs and priorities and determining the most effective course of action in order to maximize the effective use of limited resources and gain the best return on investment” (Rouse n.d.). Resource management is about making sure you have the resources you need at the right time, but it’s also about avoiding stockpiling resources unnecessarily (and therefore wasting them) and about “making sure that people are assigned to tasks that will keep them busy and not have too much downtime” (Business Dictionary n.d.). The essence of resource allocation is resource loading, or the process of assigning resources (most often people) to each and every project activity. In resource loading, we look at the tasks involved in the project, and then, using past experience and some judgment, determine how much work (typically measured in person hours) to assign to each resource in order to achieve the desired schedule. In the early stages of a project, resource loading provides a quick check on resource demand and supply. Any indication that demand is tight for a particular resource should serve as a warning that you will have to carefully monitor that resource throughout the project (a minimal sketch of such a check appears below). In any resource loading decision, you need to distinguish between fixed resources (which remain “unchanged as output increases”) and variable resources (which change “in tandem with output”) (Reference n.d.). The Screwdriver Rule Investing in resources you might need, but don’t necessarily need immediately, is similar to keeping screwdrivers of varying sizes available in your tool drawer at home. You would never want to have to go out and buy a screwdriver just to complete a quick task like tightening the legs on a chair. Most people would agree that the cost of buying and storing a set of screwdrivers is less than the inconvenience of not having them on hand when you need them. In the same vein, a project manager might make a similar value judgment about resource availability to ensure that the project as a whole progresses smoothly. But of course you don’t ever want to unnecessarily stockpile resources that could be used elsewhere in your organization. The geometric-order approach to resource allocation presumes a systematic process, in which you know well in advance which resources you’ll need at any one time and have a clear path to acquiring those resources. This is the ideal situation and is usually the result of years of experience that allow managers to foresee needs way down the road. For example, mature project management organizations know to hire staff in anticipation of upcoming project needs and provide developmental opportunities to challenge and retain their best employees. By contrast, less experienced project management organizations identify project teams on a “just in time” basis, a practice that can compromise a project from the beginning. While having everything you want when you need it is the ideal, it’s rarely the norm in the permanent whitewater of the living order. In a changeable environment, resource allocation is all about adaptation. You might start by planning the necessary resources in a geometric way. However, altering circumstances could mean you need to revise your plan day-to-day.
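As a quick illustration of the demand-versus-supply check mentioned above, here is a minimal sketch. The tasks, roles, and hours are hypothetical, and in practice this check would usually be done in scheduling software rather than by hand.

```python
# Minimal resource-loading check (hypothetical assignments, in person-hours per week).
# Demand per person is summed across tasks and compared with weekly availability.

assignments = [                # (task, resource, person_hours_per_week)
    ("Install conveyor", "Controls engineer", 24),
    ("Write PLC logic", "Controls engineer", 20),
    ("Commissioning plan", "Manufacturing engineer", 16),
]

availability = {"Controls engineer": 32, "Manufacturing engineer": 40}  # hours/week

demand = {}
for task, resource, hours in assignments:
    demand[resource] = demand.get(resource, 0) + hours

for resource, hours in demand.items():
    status = "OK" if hours <= availability[resource] else "OVERLOADED"
    print(f"{resource}: {hours}h demanded vs {availability[resource]}h available -> {status}")
```

A check like this flags where demand is tight, but it is only a snapshot of the plan at a single moment.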
In other words, in living order, you need to actively manage the resources required for your project tasks. You can’t assume that because you’ve made a plan, every person and everything you need will show up on time, according to your plan. For example, in manufacturing, when installing a new piece of automation, a company absolutely must have maintenance, manufacturing engineers, and control staff available to jump in when they are needed. The schedule might spell out detailed dates, but everyone still has to remain flexible because their piece of the work will affect the overall timeline. Adding to the complexity of the situation, the personnel working on your project may have to be available to staff the rest of the plant. They might suddenly need to postpone work on your automation project in order to work on a crisis affecting a particular customer order. And keep in mind that from one project to the next, your control over day-to-day assignment of resources could vary considerably. In most situations, project managers need to coordinate, negotiating and contracting with others to get resources when they need them. What’s more, there’s often a time lag between identifying the need for a resource and getting it deployed. This is especially true for research-oriented tasks, in which resolving unknowns sets the pace of progress. Getting Creative with Resource Management Successful project managers aren’t afraid to get creative in their approach to resource management. An interesting example of resource control in highway construction requires contractors to rent lanes per day until they finish working on them. This method is described here: http://www.dot.state.mn.us/const/tools/documents/Lanerentalonly.pdf. In some situations, you shouldn’t even assume that you know what resources you will need in the first place. It can be hard to accept this fact, even in organizations that have fully embraced living order. In their article “Managing Resources in an Uncertain World,” John Hagel, John Seely Brown, and Lang Davison argue that even the most well-conceived pull plan—a plan informed by the best precepts of Lean—can be limited by the assumption that the planners know exactly what resources they need in the first place. To overcome this challenge, they argue for an even more flexible version of pull planning: In a world of accelerating change, we no longer can be certain we know what to seek. What happens when we don’t even know that a product or person exists, yet that product or person is highly relevant to our needs of the moment? Lean manufacturing systems at least assume that we know what we need at any point in time…. Increasingly, we need pull platforms that can bring us relevant resources that we did not know existed but are useful to us. They must do this in a scalable fashion as well since the resources may be in a remote part of the world or developed by individuals who are just beginning to become visible with newly acquired skills. In other words, these pull platforms must offer serendipity as well as robust search capability. (Hagel III, Brown and Davison 2009) Seeing the Big Picture Resource allocation occurs within a broader organizational context, subject to pressures that go beyond an individual project. That means that getting and using the resources you need, when you need them, is rarely as simple as it might seem when spelled out in a project schedule. 
As discussed in Lesson 2, in a well-run organization, project selection is guided by the organization’s overall strategy. The same is true of resource allocation; decisions about what resources will be available to which projects are, ideally, made in alignment with the organizational strategy. For project managers, it’s important to keep this in mind. It might be better for you and your project to have access to a certain resource on a certain day, but that might not necessarily be the best option for the organization as a whole. Other realities can affect your ability to gain access to and pay for a resource. In the case of a scarce piece of equipment, you might try to reserve it for more time than is strictly necessary, so you can use it when you need it. This allows you to purchase flexibility, but that flexibility might be more expensive than you can afford. However, if you let a critical resource go, you might not get it back when you need it, or you might need to pay a charge for reactivating the resource. In other situations, you may be forced to pay for more than you need. For example, in projects involving union labor, you might have to pay for a half-day of labor for someone to operate equipment that you actually only need for two hours. All of these factors affect the reality of getting a resource, how much it costs, and when it is available. Another important factor affecting the allocation of resources is the often intense competition for resources within an organization. In an article describing how the pharmaceutical company SmithKline Beecham (SB) improved its resource-allocation process, Paul Sharpe and Tom Keelin acknowledge the realities of intra-organizational competitiveness: How do you make good decisions in a high-risk, technically complex business when the information you need to make those decisions comes largely from the project champions who are competing against one another for resources? A critical company process can become politicized when strong-willed, charismatic project leaders beat out their less competitive colleagues for resources. That in turn leads to the cynical view that your project is as good as the performance you can put on at funding time…. One of the major weaknesses of most resource-allocation processes is that project advocates tend to take an all-or-nothing approach to budget requests. At SB, that meant that project leaders would develop a single plan of action and present it as the only viable approach. Project teams rarely took the time to consider meaningful alternatives—especially if they suspected that doing so might mean a cutback in funding. (1998) The improved resource allocation process that Sharpe and Keelin developed was systematic and value-driven, but the key to their approach came down to one thing: better communication among project managers and other stakeholders. This, in turn, allowed them to trust each other, so they could turn their attention to the company’s overall strategic goals rather than skirmishing over available resources. Sharpe and Keelin found that “by tackling the soft issues around resource allocation, such as information quality, credibility, and trust, we had also addressed the hard ones: How much should we invest and where should we invest it?” In other words, resource allocation is yet another area of project management in which good communication can help smooth the way to project success. 
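The reservation trade-offs described above can be framed as a rough expected-cost comparison. All of the figures below are hypothetical; the point is simply that "purchased flexibility" has a price that can be weighed against the chance of losing a scarce resource and paying to get it back.

```python
# Hypothetical comparison: keep a scarce piece of equipment on reserve for idle days,
# or release it and risk a reactivation charge plus schedule delay.

idle_days = 4
daily_rental = 800.0           # cost per idle day if we hold the reservation
reactivation_fee = 1_500.0     # charge to get the equipment back later
delay_cost_per_day = 2_000.0   # cost of schedule slip while waiting
expected_delay_days = 3
prob_unavailable = 0.4         # chance the equipment is gone when we need it again

cost_hold = idle_days * daily_rental
cost_release = prob_unavailable * (reactivation_fee + expected_delay_days * delay_cost_per_day)

print(f"Hold the reservation: ${cost_hold:,.0f}")
print(f"Release and re-book:  ${cost_release:,.0f} (expected)")
print("Cheaper option:", "hold" if cost_hold < cost_release else "release")
```

As noted above, the cheapest option for your project is not automatically the best option for the organization as a whole.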
Over-Commitment, Over-Allocation, and Risk Management Resource allocation is inextricably tied up with risk management. If you fail to secure the resources you need, when you need them, you risk delays, mounting costs, and even project failure. Two of the most common ways that a needed resource can suddenly become unavailable to your project are • over-commitment: A resource allocation error that occurs when a task takes longer than expected, tying up the resource longer than originally scheduled. • over-allocation: A resource allocation error that occurs when a resource is allocated to multiple projects with conflicting schedules, making it impossible for the resource to complete the assigned work on one or more of the projects as scheduled. In an article for TechRepublic.com, Donna Fitzgerald explains the distinction: An individual can theoretically be over-allocated to many projects; an individual can be overcommitted only to a specific body of work. The reason for this distinction is that over-commitment and over-allocation really are two separate problems. If an individual is assigned a task and the work on that task turns out to be twice the effort originally estimated—and the project duration isn’t moved out—the individual is overcommitted. If a person is allocated to multiple projects, then it’s an issue of over-allocation. I believe that problems arise because of a failure to admit that a single person can’t be in two places at the same time. (Fitzgerald 2003) Fitzgerald argues that over-commitment is a problem that a project manager can typically resolve within the confines of individual projects. Over-allocation, by contrast, is something that can only be fully solved “at the organizational level…by establishing clear project priorities and a clear process for mediating the inevitable conflict in priorities” (Fitzgerald 2003). The unfortunate fact is that if you face an organization-wide over-allocation problem, you may have no option but to deal with it as best you can. Successful project managers learn to ride the waves of over-allocation whitewater, making do with the resources made available to them: In the final analysis, resource overallocation is a failure of prioritization, a failure of planning, and a failure to accept that reality always imposes constraints. The nimble project manager understands that things will always change and that even in the best of systems there will be times when multiple projects are competing for the same resource. The only way to really solve this problem is by eliminating unnecessary conflicts in the initial planning stages through prioritization and project timing and by establishing the discipline to make conscious decisions about which projects slip and which stay on track when Murphy’s Law comes into play. (Fitzgerald 2003) But is there something individual project managers can do to prevent over-allocation from causing havoc with their projects? Fitzgerald suggests that you start by learning more about how resources are allocated in your organization. A good way to do this is to recruit “other project managers into a Community of Practice” as she explains in this helpful article: http://www.techrepublic.com/article/with-a-little-help-from-my-friends-exploring-communities-of-practice-in-project-management/. 
Fitzgerald argues that such groups can go a long way towards resolving all sorts of project team rivalries, including rivalries involving resource allocation: The key is to get a group of PMs together and to establish a planning committee that would work to keep PMs from stepping all over one another. Simply making the decision to avoid letting the situation reach the crisis point and to open up the communication channels will begin to reduce the probability that resources are mythically overallocated. (Fitzgerald 2003) Fitzgerald also suggests applying risk management techniques to critical resources from the earliest days of project planning: As a general practice, I begin every project by identifying my critical resources and developing a contingency plan for replacement or substitution of those resources in the event of an emergency…. By establishing nothing more than the most minimal practice of risk management, you can ensure that resource problems are brought to light early in the project life cycle rather than later when the solutions are more limited and more expensive. (Fitzgerald 2003) 10.2 Geometric Resource-Optimization Techniques So far, we’ve focused on several ways living order can disrupt your best-laid resource allocation plans. But that’s not to say that, when thinking about resources, you should dispense with careful, geometric-order planning. Far from it. In the next two sections, we look at some helpful resource-optimization techniques. Resource Leveling and Resource Smoothing Two important resource allocation tools available to a project manager are resource leveling and resource smoothing. These techniques make use of slack (also called float), which, as you learned in Lesson 7, is the amount of time that a task can be delayed without causing a delay to subsequent tasks or the project’s completion date. Understanding the distinction between resource leveling and resource smoothing can be tricky, so let’s start with basic definitions: • resource leveling: An “approach to project scheduling whereby task start and end dates are determined by the availability of internal and external resources…. Resource leveling will resolve over-allocations by moving task start and end dates, or extending task durations in order to suit resource availability” (ITtoolkit n.d.). Resource leveling may modify the critical path or extend the duration of the project, depending on the availability of critical resources and the ability to accomplish the required leveling using available slack/float. • resource smoothing: “A scheduling calculation that involves utilizing float or increasing or decreasing the resources required for specific activities, such that any peaks and troughs of resource usage are smoothed out. This does not affect the overall duration” (Association for Project Management n.d.). Because of the complexities involved, both resource leveling and resource smoothing are typically done using project management software such as Microsoft Project. A blog post for the Association for Project Management distinguishes between resource leveling and resource smoothing as follows: Resource smoothing is used when the time constraint takes priority. The objective is to complete the work by the required date while avoiding peaks and troughs of resource demand. Resource leveling is used when limits on the availability of resources are paramount. It simply answers the question “With the resources available, when will the work be finished?” (Association for Project Management n.d.)
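Because these calculations are normally left to scheduling software, the following sketch is only meant to show the underlying idea of leveling: a task is delayed within its available slack until a resource is no longer overloaded. The tasks, hours, and capacity are hypothetical.

```python
# Minimal illustration of resource leveling: delay a task within its slack (float)
# so that weekly demand for one resource no longer exceeds availability.

# (task, start_week, duration_weeks, slack_weeks, hours_per_week)
tasks = [
    ["Wiring", 1, 2, 0, 30],          # on the critical path, cannot move
    ["Panel checkout", 1, 2, 2, 20],  # has 2 weeks of slack
]
CAPACITY = 40  # the electrician's available hours per week

def weekly_demand(task_list):
    demand = {}
    for _, start, duration, _, hours in task_list:
        for week in range(start, start + duration):
            demand[week] = demand.get(week, 0) + hours
    return demand

def overloaded(task_list):
    return any(h > CAPACITY for h in weekly_demand(task_list).values())

# Level: push movable tasks later, one week at a time, within their slack.
for task in tasks:
    while overloaded(tasks) and task[3] > 0:   # slack remaining
        task[1] += 1                           # delay start by one week
        task[3] -= 1                           # consume one week of slack

print("Leveled schedule:", [(t[0], f"starts week {t[1]}") for t in tasks])
print("Weekly demand:", weekly_demand(tasks))
```

If the slack ran out before the overload was resolved, leveling would push the start date further and extend the project; smoothing, by contrast, would hold the finish date and instead vary the hours assigned per week.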
In resource leveling, the project manager moves resources around in the schedule in order to level off some of the peaks and valleys of resource requirements. Task start dates are modified as necessary to use slack wherever possible to reduce resource conflicts. If necessary, activity start dates are shifted further to eliminate resource constraints; these shifts beyond initial slack constraints extend the duration of the project. You can see some examples of resource leveling here: http://www.mpug.com/articles/resource-leveling-best-practices/. Even after judicious resource leveling, you may still find that demand for one or more resources exceeds existing constraints in order to meet a schedule requirement. For example, you might find that you simply don’t have enough experienced electricians on staff to complete a task by a fixed milestone date. In that case, you will need to consider adding resources to the project—for example, perhaps by hiring some electricians from another firm. But remember that bringing on new resources may temporarily slow the project due to the time it takes for both the project team and the new resource to adjust. When facing insurmountable resource constraints, you might find that you simply have to extend the schedule or modify the project’s scope. Note that resource leveling, as described here, is rarely appropriate in the world of software development projects. Unless the people on the project have experience relevant to the tasks to be accomplished and have worked on similar projects with well-defined scope, resource leveling may not prove useful. However, resource leveling can be useful in software consulting firms that perform system upgrades for clients and have established a repeatable process for doing the upgrade. Reducing Resource Use Through Schedule Compression: Yes and No Managers often assume that the schedule compression techniques discussed in Lesson 7 can have the side benefit of reducing indirect costs for resources like maintenance personnel, administrative staff, or office space. This is true in some fields, making it a very useful option. But it doesn’t typically work in the IT world, as documented by Steve McConnell in his book Rapid Development: Taming Wild Software Schedules. He explains that focusing too much on schedules at the expense of other resource-intensive work such as planning and design will almost always result in a late project: You can use the strongest schedule-oriented practices, but if you make the classic mistake of shortchanging product quality early in the project, you’ll waste time correcting defects when it’s most expensive to do so. Your project will be late. If you skip the development fundamental of creating a good design before you begin coding, your program can fall apart when the product concept changes partway through development, and your project will be late. And if you don’t manage risks, you can find out just before your release date that a key subcontractor is three months behind schedule. You’ll be late again. (1996, 9) Critical Chain Approach In Lesson 7 you learned about the critical path method of schedule management, which helps identify the minimum total time required to complete a project—that is, the critical path. This way of thinking about a project focuses on finding the right order for tasks within the schedule.
By contrast, a related scheduling method, the critical chain method (CCM), focuses on the resources required to complete a project, adding “time buffers to account for limited resources” (Goodrich 2018). Critical chain management was first introduced by Eliyahu M. Goldratt in his 1997 book Critical Chain. To learn more about this important topic, start by reviewing this summary: https://www.simplilearn.com/what-is-critical-chain-project-management-rar68-article. This helpful video explains the basic concepts related to the critical chain method: https://www.youtube.com/watch?v=mpc_FdAt75A. 10.3 Estimating Resource Capacity in Agile In theory, resource management in Agile should be simple. After all, in Agile, resources and time are usually fixed. The team has a fixed budget, a fixed number of programmers, and a fixed amount of time to create working software. The variable in all this is the software itself. Throughout the cycle of sprints—as the customer tries out new software and requests alterations—the software features can change dramatically. When the budget is exhausted, the project ends. But because Agile developers create working software bit-by-bit, the customer is assured of having at least some usable features by that point. So again, resource management in Agile should be simple—in theory. But in reality, the key resource in software development is the people who create the software. And as you learned in the discussion on teams in Lesson 5, where people are concerned, things rarely go as planned. Some programmers work faster than others, and individuals can vary tremendously in their output from one week to the next, especially when dealing with personal problems, like illness or family conflict. Robert Merrill, a Senior Business Analyst at the University of Wisconsin-Madison, and an Agile coach, puts it like this: Agile is more about people than computers. People are not interchangeable, they have good days and bad days. They get along or they don’t. Cognitive abilities vary tremendously. If you aren’t successful in helping teams gel and stay focused, you’re going to spend lots of extra money, or the project may blow up. You need to get the teams right. (Merrill 2017) As Gareth Saunders explains in a thoughtful blog post on the topic, this is all complicated by the amount of “business as usual” tasks that developers typically have to fit into their schedules on top of their work on specific Agile projects. This includes tasks like “admin, team communications, support, mentoring, meetings, and consultancy—offering our input on projects managed by other teams” (Saunders 2015). As a result, as a project manager, Saunders struggles to answer the following questions: 1. How do we know how much time each team member has to work on projects? 2. When we’re planning the next sprint, how do we track how much work has been assigned to a team member, so that they have neither too little nor too much work? (Saunders 2015) Again, in theory, this should not be difficult. If you have, for instance, “five developers, each with 6 hours available for work each day. That gives us 30 hours per day, and assuming 9 days of project work (with one full day set aside for retrospective and planning) then within each two-week sprint we should be able to dedicate 270 hours to development work” (Saunders 2015). In reality, however, business as usual tasks can eat up 40% of a programmer’s working week, with that percentage varying from week to week or month to month.
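Using the figures Saunders quotes, the capacity arithmetic might be sketched as follows; the 40% business-as-usual share is his estimate, and everything else is illustrative.

```python
# Sprint capacity estimate based on the figures quoted above (Saunders 2015).
developers = 5
hours_per_day = 6          # focused project hours per developer per day
sprint_project_days = 9    # two-week sprint, minus one day for retrospective/planning

theoretical_capacity = developers * hours_per_day * sprint_project_days   # 270 hours

bau_share = 0.40           # "business as usual": support, admin, meetings, mentoring
realistic_capacity = theoretical_capacity * (1 - bau_share)

print(f"Theoretical sprint capacity: {theoretical_capacity} hours")
print(f"Capacity after ~40% BAU work: {realistic_capacity:.0f} hours")
```

The gap between those two numbers, and the fact that the business-as-usual share swings from week to week, is exactly why the arithmetic alone is not enough.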
Difficulties in estimating a team member’s capacity for work on a project are something every project manager faces. But in Agile, estimating capacity can be especially difficult. As you learned in Lesson 5, in Agile, project managers (or Scrum masters) ideally exert minimal direct influence on day-to-day work, because teams are supposedly self-organizing—that is, free to manage their work as a group, and pull work when they are ready for it. This means Agile project managers need to take the long view on resource management by practicing good resource capacity management, which involves “planning your workforce and building a skill inventory in exact proportion to the demand you foresee. It lets you optimize productivity and as a concept perfectly complements the Agile methodology” (Gupta 2017). 10.4 Resources and the Triple Bottom Line When making decisions about resources, you may naturally focus on what will allow your team to finish a project as quickly and efficiently as possible. As a result, you might be tempted to make decisions that use more fuel than is strictly necessary, exploit cheap labor, or pollute a local lake. But that approach fails to take into account the longer view on personal and organizational responsibility that lies at the core of the sustainability movement. John Elkington introduced the term triple bottom line (TBL) as a way to broaden corporate thinking about the cost of doing business to include social and environmental responsibilities. Rather than focusing solely on profit and loss, Elkington argued that organizations should pay attention to three separate bottom lines: One is the traditional measure of corporate profit—the “bottom line” of the profit and loss account. The second is the bottom line of a company’s “people account”—a measure in some shape or form of how socially responsible an organization has been throughout its operations. The third is the bottom line of the company’s “planet” account—a measure of how environmentally responsible it has been. The triple bottom line (TBL) thus consists of three Ps: profit, people, and planet. It aims to measure the financial, social, and environmental performance of the corporation over a period of time. Only a company that produces a TBL is taking account of the full cost involved in doing business. (The Economist 2009) More and more, organizations are incorporating sustainability concerns into their long-term strategies, in part because their customers demand it, and in part because the sustainable choice often turns out to be the profitable choice. If you are lucky enough to work for an organization that is fully invested in its triple bottom line, you will be encouraged to make resource allocation decisions that reflect sustainability concerns. If your organization isn’t there yet, consider staking out a position as an agent of change, educating colleagues about the benefits of the triple bottom line. You can start by educating yourself. The following resources are a good first step: • Cannibals with Forks: The Triple Bottom Line of 21st Century Business: In this 1999 book, John Elkington first introduced the idea of the triple bottom line. • This brief introduction summarizes the basic issues related to the triple bottom line: https://www.economist.com/node/14301663.
10.5 From the Trenches: John Nelson on Resources at the Portfolio Level As an executive concerned with the well-being of an entire organization, John Nelson has to look at resource management on a portfolio level. Whereas individual project managers naturally focus on short-term resource availability for their projects, an executive’s goal is ensuring that resources are available for many projects over the long term. In a recent lecture, he offered some thoughts on managing resources at the portfolio level: Whether you’re deploying capital resources, outside resources, or your own internal staffing resources, it’s almost axiomatic that you will face resource constraints in living order. When considering how a particular project fits within a larger portfolio, you need to keep in mind the organization’s resource elasticity, and the organization’s ratio of percentage of creative personnel. Let’s start with resource elasticity. Organizations can be whipsawed by projects that are so large they consume a disproportionate amount of the organization’s resources. If a project like that ends abruptly, for an unexpected reason, the organization will struggle to get project resources redeployed. To avoid this, it’s a good idea to make sure no project exceeds one-third (or in some cases one-fourth) of the organization’s total capacity. Now let’s consider the critical ratio of creative people to people who excel at execution. Some projects require a lot of creativity and thinking. Some just require execution. If you have a portfolio of highly creative assignments, but a resource base that’s largely execution-oriented, you’re going to struggle. The opposite is also true: if you have lots of execution-oriented projects with only highly creative people on staff, you might complete the project successfully, but you’ll probably burn through resources faster than you want, because creative people aren’t as efficient and effective at execution. It almost goes without saying, though, that you do have to have creative people in your organization. In living order, it’s rare that I come across a project that doesn’t involve any creative people. My rule of thumb is to have about 30% of my staff to be highly creative. This has worked well for me, although sometimes 40% or even 50% is best. You have to keep these kinds of concerns in mind as you look at projects in portfolios, at the organizational level, to make sure that over the long term you have a reasonable chance of meeting the value proposition, meeting the customer’s expectations, and maintaining the health of your organization. (Nelson 2017) 10.6 Externalities and Looking to the Future A project manager with a serious appreciation for living order understands that external factors may fluctuate during project execution, making previously widely available resources impossible to obtain. For example, there may be a run on certain materials, or a certain type of expertise might suddenly be consumed by an emergency somewhere in the world. Any development like this can force you to rethink your original expectations. You need to be prepared to adapt your budget, scope, and schedule to the realities that evolve during project execution. Keep in mind that resources can become suddenly scarce. For example, right now materials engineers are a hot commodity, because more engineers are retiring than entering this field. Compounding the problem, new designs and manufacturing techniques have expanded the need for materials engineers. 
A 2018 check of Indeed.com turned up over 47,000 openings for materials engineers. As you might expect, new engineering students are responding to the call. At the UW-Madison, enrollment in this area of engineering has grown dramatically. But it will still be a while until there is enough materials expertise to go around. And keep in mind that a constraint on the availability of resources is not necessarily the worst thing that can happen to an organization or to an individual project. In fact, the origins of Lean and the Toyota Production System can be traced back to resource constraints in Japan at the end of World War II. In an article for MIT Sloan Management Review, Michael Gibbert, Martin Hoegl, and Liisa Välikangas argue that abundant resources can sometimes stifle innovation: Resource constraints fuel innovation in two ways. In a 1990 article in Strategic Management Journal, J.A. Starr and I.C. MacMillan suggested that resource constraints can lead to “entrepreneurial” approaches to securing the missing funds or the required personnel. For example, the Game Changer innovation program of Royal Dutch Shell Plc long operated on the shoulders of its social network, which allowed innovators to find technically qualified peers willing to contribute to their efforts on a complimentary basis. In other words, individuals innovate despite the lack of funding by using social rather than purely economic strategies. Thus tin-cupping, horse trading, boot strapping, and currying personal favors partly or wholly substitute for economic transactions in which non-entrepreneurial innovators (or those less socially connected) would pay the full price. Such efforts speak for “resource parsimony”—deploying the fewest resources necessary to achieve the desired results. For instance, new product development teams might use testing equipment on weekends, when it is readily available and free. Likewise, team members might know engineers or other professionals—say, from supplier firms involved in past projects—who would be glad to give informal design reviews in anticipation of future remunerative work. Resource constraints can also fuel innovative team performance directly. In the spirit of the proverb “necessity is the mother of invention,” teams may produce better results because of resource constraints. Cognitive psychology provides experimental support for the “less is more” hypothesis. For example, scholars in creative cognition find in laboratory tests that subjects are most innovative when given fewer rather than more resources for solving a problem. The reason seems to be that the human mind is most productive when restricted. Limited—or better focused—by specific rules and constraints, we are more likely to recognize an unexpected idea. (Gibbert, Hoegl and Välikangas 2007) Gibbert et al. argue that managers with access to all the resources they could possibly want tend to fall into the trap of throwing money at problems, rather than sitting down to think of effective solutions to the kinds of problems that arise in the permanent whitewater of the modern business world. Then, when projects fail, “rationalizations often start with excuses such as ‘We ran out of money’ or ‘If only we had more time.’ In such cases, the resource-driven mindset may well have backfired. Resource adequacy is in the eye of the beholder, and if a team has the perception of inadequate resources, it may easily be stifled.” Gibbert et al. 
describe several projects in which resource constraints turned out to be a blessing, not a curse. For example: In the post–World War II era, several American teams under General Electric Co., and several German teams under Bayerische Motoren Werke AG were competing against each other in a race to resolve the jet engine performance dilemma. The stakes were high, given that the Cold War had started and the West was eager to come up with reliable jet technology before the Soviet Union did. The German team eventually won by proposing a radical departure from the status quo, an innovation that in fact is still used today. It developed a “bypass” technology in which the rotor blades and other engine parts most exposed to high temperatures were hollowed out so that air could flow through them, thereby cooling them off. Whence this idea? The American team had a virtual blank check to buy whatever costly raw materials it needed to create the most heat resistant alloys (the Cold War jet propulsion development program cost the U.S. government nearly twice as much as the Manhattan Project). The German team, by contrast, was forced to rely on cheaper alloys, as it had significantly less funding at its disposal and simply couldn’t afford the more expensive metals. (Gibbert, Hoegl and Välikangas 2007) Don’t underestimate the management hours required to keep track of a high number of resources. For example, an experienced manager of engine-related projects reported that more than 50 core team members was too many for one project manager to keep track of. With over 50 team members, the burden of coordination and communication often outweighed the benefit of extra resources. Resource Management and Proactive Resilience In their book Becoming a Project Leader, Alexander Laufer, Terry Little, Jeffrey Russell, and Bruce Maas discuss the benefits of proactive resilience—taking timely action to prevent a crisis, often by introducing a change that upends the usual way of doing things. In living order, where resource availability is never a given, proactive resilience is an essential component of good resource management. As an example of proactive resilience in action, Laufer et al. describe the work of Don Margolies, a project manager in charge of NASA’s Advanced Composition Explorer, a robotic spacecraft launched into orbit in 1997 to collect data on solar storms. At one point, facing a $22 million cost overrun related to the development of nine scientific instruments, his dramatic intervention ultimately saved the project: Don concluded that unless he embarked on an uncommon and quite radical change, the project would continue down the same bumpy road, with the likely result that cost and time objectives would not be met. To prevent this, he made an extremely unpopular decision: He stopped the development of the instruments, calling on every science team to revisit its original technical requirements to see how they could be reduced. In every area—instruments, spacecraft, ground operation, integration and testing—scientists had to go back and ask basic questions, such as “How much can I save if I take out a circuit board?” and “How much performance will I lose if I do take it out?” At the same time, Don negotiated a new agreement with NASA headquarters to secure stable funding, detached from the budget of the other six projects affiliated with the Explorers program. To seal the agreement, he assured them that by reducing his project’s scope, it would not go over budget.
With the reduced technical scope and the stable budget, the ACE project gradually overcame both its technical and organizational problems. Eventually, it was completed below budget, and the spacecraft has provided excellent scientific data ever since. (Laufer, et al. 2018, 57) Resource parsimony is not the answer to every resource allocation problem, but it can definitely stimulate new and effective approaches that might otherwise go undiscovered. In the same way, the many living order challenges facing today’s organizations can encourage managers to develop new ways to manage resources. ~Practical Tips • Similar does not mean equal: Similar resources are not necessarily interchangeable. For instance, two people might work under the title “Senior Designer.” However, because of education and experience, one of them might be far more suited to your project. The problem is, computerized resource allocation methods often fail to distinguish differences among similar resources. Whenever possible, take the time to evaluate the people and other resources that are key to your project to ensure that you have allocated the appropriate resources. Plan projects based on an average capability resource. That way, across all projects, the estimates should even out to be about right. If it takes a good designer three days to design a part and a less capable designer five days, you should plan on four days for designer time. • Economic downturns and upturns can affect resource availability: Economic conditions influence the cost and availability of high-demand resources. Changing economic conditions or changing technical requirements might make it difficult to get the expertise you need when you need it. The same may be true in reverse. Sometimes, because the economy is in a downturn, certain resources become more available. These factors may influence the cost and availability of resources needed for your project. • Share resource allocation decisions to gain buy-in: If possible, try to make resource allocation decisions available to your entire organization. This will encourage people outside your specific project to buy in to your project’s goals. It can also help minimize the kind of resentment that arises when project managers are competing for scarce resources. This phenomenon is explained in the blog post “5 Ways Top Project Managers Allocate Their Resources”: Resourcing isn’t just for your team—it applies to the rest of your company too. Think beyond project life cycle planning; when allocations are visible to everyone, the entire agency can see how pieces fit together and where their “quick tweaks” or internal projects align with the grand scheme of things. This can significantly cut down on emails, facilitate conversations that would otherwise require rounds of meetings, and serve as a precursor to monthly budget reviews or executive presentations. A resourcing system visible to key parties and departments, and sortable by tasks and skills, can help tremendously while preparing budgets and schedules. Top project managers make sure the bigger picture is always in perspective. (MICA 2014) • Keep marginal costs in mind: Economies of scale prevail in resource management, but only to a certain point. You need to keep in mind the marginal cost of a resource. For example, the hourly cost of labor may be fixed to a point, but once you move from regular hours to overtime hours, the marginal cost increases significantly.
So always look at the marginal cost of existing personnel or equipment hours, versus the new marginal cost of adding personnel or equipment hours. • Think strategically about who should control a particular resource: As you have more control over a resource at the project level, you typically have more cost for carrying that resource through the project. That does give you more flexibility, but what is best at the project level may not be best for the overall organization. It’s possible that having a resource controlled at the organizational level may give greater flexibility for the organization overall. • Understand minimum units of allocation: It’s rarely helpful to allocate 3.8 people to a task. Instead, it is almost always more realistic to allocate 4 people full-time. Similarly, most facilities can only be realistically hired by the day/week and not by the hour/minute. Understanding minimum units of allocation is important in realistic planning. • Plan for shared resources: In an ideal world, all resources are dedicated solely to your project. However, it is more common to have shared resources. If you are working on a project with shared resources, you’ll need to schedule your use of those resources even more carefully than if they were dedicated solely to your project. • Be prepared to wait for resources: Some equipment or facilities have to be booked in advance. Once you book them, you may not have flexibility to change your dates. This is a good opportunity to practice contingency planning: what other work can continue while you wait for a resource to become available? • Beware personnel turnover: In long-running projects, highly skilled people retiring or moving on to new jobs can be a major issue, and something you should beware of as you allocate human resources to your project. Any transition of key leadership can have an impact on a project’s progress and directly affect its overall success. Do all you can to proactively manage transitions throughout a project. Managing Transitions: Making the Most of Change, by William Bridges, is a classic resource on managing change in the workplace. It includes practical assessments that the readers can use to improve their own transition management skills. • Allocate resources by name when necessary: If a specific resource—such as a particular test cell or person—is essential to project success, then take care to allocate that resource on a named basis, rather than as a general category of resource—for example, “Anita Gomez,” rather than “Designer.” However, you should avoid this specificity in all but the most critical cases, as it reduces flexibility and hinders developmental opportunities (increasing general bench strength). • Do all you can to prevent burnout: Be careful of overextending the people on your team. Stretching to the point of strain can cause unnecessary turnover, with no extra hours available for pitching in at crunch times. A good rule of thumb is to allocate a person 85%; this leaves time for vacation, development, and company projects. ~Summary • Resource management is about making sure you have the resources you need at the right time, but it’s also about avoiding stockpiling resources unnecessarily (and therefore wasting them). The most detailed schedules and budgets in the world are useless if you don’t have the people, equipment, facilities, and other resources you need, when you need them. Until you have assigned and committed resources, you don’t have a project schedule and your budget has no real meaning. 
• The essence of resource allocation is resource loading, or the process of assigning resources (most often people) to each and every project activity. While having everything you want when you need it is the ideal, it’s rarely the norm in the permanent whitewater of living order. In a changeable environment, resource allocation is all about adaptation and seeing the big picture. • Resource allocation is inextricably tied up with risk management. If you fail to secure the resources you need when you need them, you risk delays, mounting costs, and even project failure. Two of the most common ways that a needed resource can suddenly become unavailable to your project are over-commitment (which occurs when a task takes longer than expected, tying up a resource longer than originally scheduled) and over-allocation (which occurs when a resource is allocated to multiple projects with conflicting schedules). • For resource allocation, two important geometric-order tools are resource leveling and resource smoothing. Another helpful option is a scheduling method known as the critical chain method (CCM), which focuses on the resources required to complete a project. • In Agile, where time and money are typically fixed, managing resources is theoretically a simple matter. However, the self-organizing nature of Agile teams presents special resource allocation challenges, which can be overcome through resource capacity management. • John Elkington introduced the term triple bottom line (TBL) as a way to broaden corporate thinking about the cost of doing business to include social and environmental responsibilities. Elkington argued that rather than focusing solely on profit and loss, organizations should pay attention to three separate bottom lines: profit, people, and the health of the planet. • Whereas individual project managers naturally focus on short-term resource availability for their projects, an executive’s goal is ensuring that resources are available for many projects over the long term. When looking at resources from the portfolio level, try to make sure no project exceeds one-third to one-fourth of the organization’s total capacity. Also keep in mind that some projects require a healthy contingent of highly creative people, but too many creative people on a project can hamper execution. A good rule of thumb is to have about 30% of staff be highly creative. • You need to be prepared to adapt your budget, scope, and schedule to the externalities that evolve during project execution. And keep in mind that a constraint on the availability of resources is not necessarily the worst thing that can happen to an organization or to an individual project. You can forestall crises related to resources by practicing proactive resilience—that is, by taking timely action to prevent a crisis, often by introducing a change that upends the usual way of doing things. In living order, where resource availability is never a given, proactive resilience is an essential component of good resource management. ~Glossary • fixed resource—A resource that “remains unchanged as output increases” (Reference n.d.). • over-allocation—A resource allocation error that occurs when more work is assigned to a resource than can be completed within a particular time period, given that resource’s availability. • over-commitment—A resource allocation error that occurs when a task takes longer than expected, tying up the resource longer than originally scheduled.
• proactive resilience—Taking timely action to prevent a crisis, often by introducing a change that upends the usual way of doing things at an organization (Laufer, et al. 2018, 56). • resource allocation—The “process of assigning and managing assets in a manner that supports an organization’s strategic goals” (Rouse n.d.). On the project level, resource allocation still involves making choices that support the organization’s strategic goals, but you also have to factor in your project’s more specific goals. • resource capacity management—The practice of “planning your workforce and building a skill inventory in exact proportion to the demand you foresee. It lets you optimize productivity and as a concept perfectly complements the Agile methodology” (Gupta 2017). • resource leveling— An approach to project scheduling that aims to avoid over-allocation of resources by setting start and end dates according to the “availability of internal and external resources” (ITtoolkit n.d.). • resource management—See resource allocation. • resource parsimony—“Deploying the fewest resources necessary to achieve the desired results” (Gibbert, Hoegl and Välikangas 2007). • resource smoothing—“A scheduling calculation that involves utilizing float or increasing or decreasing the resources required for specific activities, such that any peaks and troughs of resource usage are smoothed out. This does not affect the overall duration” (Association for Project Management n.d.). • triple bottom line (TBL)— Term introduced by John Elkington as a way to broaden corporate thinking about the cost of doing business to include social and environmental responsibilities. He argued that rather than focusing solely on profit and loss, organizations should pay attention to three separate bottom lines: profit, people, and the planet. “It aims to measure the financial, social and environmental performance of the corporation over a period of time. Only a company that produces a TBL is taking account of the full cost involved in doing business” (The Economist 2009). • variable resource—A resource that changes “in tandem with output” (Reference n.d.).
textbooks/biz/Management/Book%3A_Technical_Project_Management_in_Living_and_Geometric_Order_(Russell_Pferdehirt_and_Nelson)/1.10%3A_Allocating_and_Managing_Constrained_Resources.txt
Information is a source of learning. But unless it is organized, processed, and available to the right people in a format for decision making, it is a burden not a benefit. —William Pollard Learning Objectives After reading this chapter, you will be able to • Explain the importance of designing good monitoring practices • Describe elements of effective project monitoring and controlling • Understand how to decide what to monitor and when, and list some useful items to monitor • Distill monitoring information into reports that are useful to different stakeholders • Describe features of a good project dashboard • Compare pure, instinctual intuition to informed intuition • Explain how linearity bias can mislead assessment of project progress The Big Ideas in this Lesson • As a project manager, you have to balance looking through the front window to see where the project is headed with well-timed glances at the dashboard, while occasionally checking your rearview mirror to see if you might have missed something important. • Different types of projects require different approaches to monitoring, analytics, and control. But any technique is only useful if it enables you to learn and respond. An excessive focus on measurement, without any attempt to learn from the measurements, is not useful. • When reporting on the health of their projects, successful project managers tailor the amount of detail, the perspective, and the format of information to the specific stakeholders who will be consuming it. • Gut instinct, or pure intuition, can make you vulnerable to the errors caused by cognitive biases. Instead, aspire to informed intuition, a combination of information and instinctive understanding acquired through learning and experience. 11.1 Monitoring for Active Control Cannon Balls Versus Guided Missiles Launching a project with no expectation of having to make changes to the plan as it unfolds is like firing a cannonball. Before you do, you make ballistic calculations, using assumptions about cross winds and other conditions. After the cannonball leaves the cannon, you can monitor its progress, but not control for changes. The cannonball might hit its target if your assumptions are correct and the target doesn’t move. But if any of your assumptions turn out to be incorrect, you will miss your target. In contrast, you can correct the course of a guided missile during flight to account for changing conditions, such as a gust of wind or a moving target. A guided missile requires sophisticated monitoring and control capabilities, but is more likely to hit the target, especially under dynamic conditions. Successful project managers take the guided missile approach, correcting course as a project unfolds to account for the unexpected. The best project managers succeed through an artful combination of leadership and teamwork, focusing on people, and using their emotional intelligence to keep everyone on task and moving forward. But successful project managers also know how to gather data on the health of their projects, analyze that data, and then, based on that analysis, make adjustments to keep their projects on track. In other words, they practice project monitoring, analytics, and control. Note that most project management publications emphasize the term monitoring and control to refer to this important phase of project management, with no mention of the analysis that allows a project manager to use monitoring data to make decisions. 
But of course, there’s no point in collecting data on a project unless you plan to analyze it for trends that tell you about the current state of the project. For simple, brief projects, that analysis can be a simple matter—you’re clearly on schedule, you’re clearly under budget—but for complex projects you’ll need to take advantage of finely calibrated data analytics tools. In this lesson, we’ll focus on tasks related to monitoring and control, and also investigate the kind of thinking required to properly analyze and act on monitoring data. Generally speaking, project monitoring and control involves reconciling “projected performance stated in your planning documentation with your team’s actual performance” and making changes where necessary to get your project back on track (Peterman 2016). It occurs simultaneously with project execution, because the whole point of monitoring and controlling is making changes as team members perform their tasks. The monitoring part of the equation consists of collecting progress data and sharing it with the people who need to see it in a way that allows them to understand and respond to it. The controlling part consists of making changes in response to that data to avoid missing major milestones. If done right, monitoring and controlling enables project managers to translate information gleaned by monitoring into the action required to control the project’s outcome. A good monitoring and control system is like a neural network that sends signals from the senses to the brain about what’s going on in the world. The same neural network allows the brain to send signals to the muscles, allowing the body to respond to changing conditions. Because monitoring and controlling is inextricably tied to accountability, government web sites are a good source of suggestions for best practices. According to the state of California, monitoring and controlling involves overseeing all the tasks and metrics necessary to ensure that the approved and authorized project is within scope, on time, and on budget so that the project proceeds with minimal risk. This process involves comparing actual performance with planned performance and taking corrective action to yield the desired outcome when significant differences exist. The monitoring and controlling process is continuously performed throughout the life of the project. (California Office of Systems Integration n.d.) In other words, monitoring is about collecting data. Controlling is about analyzing that data and making decisions about corrective action. Taken as a whole, monitoring and controlling is about gathering intelligence and using it in an effective manner to make changes as necessary. Precise data are worthless unless they are analyzed intelligently and used to improve project execution. At the same time, project execution uninformed by the latest data on changing currents in the project can lead to disaster. Earned Value Management (EVM) is an effective method of measuring past project performance and predicting future performance by calculating variances between the planned value of a project at a particular point and the actual value. If you aren’t familiar with EVM, you should take some time to learn about it. This blog post provides a helpful summary: www.projectsmart.co.uk/earned-value-management-explained.php. The geometric order approach to monitoring and controlling focuses on gathering data about the past, and then using that information to estimate the future. 
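To make the basic EVM arithmetic concrete, here is a minimal sketch in Python. The project figures are invented purely for illustration; the formulas are the standard EVM definitions of schedule variance, cost variance, and the related performance indices, plus a simple estimate at completion derived from the cost performance index.

```python
# Minimal EVM sketch with made-up numbers (illustrative only, not from a real project).
# PV = planned value of the work scheduled to date
# EV = earned value of the work actually completed to date
# AC = actual cost incurred to date
# BAC = budget at completion for the whole project

def evm_metrics(pv: float, ev: float, ac: float, bac: float) -> dict:
    """Compute standard earned value variances, indices, and a simple cost forecast."""
    spi = ev / pv  # schedule performance index: < 1.0 suggests the project is behind schedule
    cpi = ev / ac  # cost performance index: < 1.0 suggests the project is over budget
    return {
        "schedule_variance": ev - pv,         # negative = behind schedule
        "cost_variance": ev - ac,             # negative = over budget
        "SPI": spi,
        "CPI": cpi,
        "estimate_at_completion": bac / cpi,  # assumes current cost efficiency continues
    }

# Example: the halfway point of a $200,000 project.
print(evm_metrics(pv=100_000, ev=90_000, ac=105_000, bac=200_000))
# SPI ≈ 0.90 (behind schedule), CPI ≈ 0.86 (over budget), EAC ≈ $233,000
```

EVM is a classic example of this geometric order approach: it extrapolates from measurements of past performance to forecast where the project is headed.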
This approach can be very helpful in some situations, but it is most effective when combined with a living order monitoring and controlling system, which does the following: • Looks at today and the immediate future. • Uses reliable promising to ensure that stakeholders commit to what needs to happen next. • Focuses on the project’s target value, modifying the path ahead as necessary to achieve the agreed-on target value. • Assumes a collaborative approach, in which stakeholders work together to decide how to adjust the project to deliver it at the target value. A living order monitoring and controlling system provides team members with the information they need to make changes in time to affect the project’s outcome. Such a system is forward-facing, looking toward the future, always scanning for potential hazards, making it an essential component of any risk management strategy. While it is essential to hold team members accountable for their performance, a monitoring and controlling system should focus on the past only in so far as understanding the past makes it possible to forecast the future and adjust course as necessary. Ideally, it should allow for rapid processing of information, which can in turn enable quick adjustments to the project plan. In other words, the best monitoring and controlling system encourages active control. Active control takes a two-pronged approach: • Controlling what you can by making sure you understand what’s important, taking meaningful measurements, and building an effective team focused on project success. • Adapting to what you can’t control through early detection and proactive intervention. The first step in active control is ensuring that the monitoring information is distributed in the proper form and to the right people so that they can respond as necessary. In this way, you need to function as the project’s nervous system, sending the right signals to the project’s muscles (activity managers, senior managers, clients, and other stakeholders), so they can take action. These actions can take the form of minor adjustments to day-to-day tasks, or of major adjustments, such as changes to project resources, budget, schedule, or scope. Notes from an Expert: Gary Whited Gary Whited, an engineer with 35 years of experience providing technical oversight of engineering projects for the Wisconsin Department of Transportation, and currently the program manager at the University of Wisconsin-Madison’s Construction and Materials Support Center, has thought a lot about project management throughout his career. The following, which is adapted from a lecture of his in 2014, summarizes his ideas on the four main steps involved in monitoring and control: 1. Measuring and tracking progress: This is the major step, one that requires a significant investment of time. Everything that follows depends on gathering accurate data. 2. Identifying areas where changes are required: This is where we put the information we’ve gathered into the context in which it is needed. 3. Initiating the needed changes: Here we take action, making any necessary changes in response to the monitoring data. 4. Closing the loop: In this step, we go back and evaluate any changes to verify that they had the intended effect, and to check for any unintended consequences. For example, if you made a change to one component (say the schedule), you need to ask what effect that change might have had on other components (such as the budget). These four steps look deceptively simple. 
But they add real complexity to any project. This is especially true of the last three steps, which involve things like change management and document control. Everyone takes measurements at the end of a project, but that’s not all that helpful, except to serve as lessons learned for future projects. By contrast, a well-implemented monitoring and control process gives stakeholders the power to make essential changes as a project unfolds (Whited 2014). 11.2 What to Monitor and When to Do It When setting up monitoring and controlling systems for a new project, it’s essential to keep in mind that not all projects are the same. What works for one project might not work for another, even if both projects seem similar. Also, the amount of monitoring and controlling required might vary with your personal experience. If you’ve never worked on a particular type of project before, the work involved in setting up a reliable monitoring and controlling system will typically be much greater than the up-front work required for a project that you’ve done many times before. For projects you repeat regularly, you’ll typically have standard processes in place that will make it easy for you to keep an eye on the project’s overall performance. Learning-Based Project Reviews Sometimes upper management owns the schedule of the project and requires ongoing assessment and monitoring in the form of project reviews, which typically have members of a board “sitting at a horseshoe-shaped table” while “a team member stands in front of them and launches a presentation.” The problem with such reviews is two-fold: 1) they can be somewhat severe and punitive, and 2) they can tear team members away from working on the project itself. In their book Becoming a Project Leader, Laufer et al. describe a learning-based project review, which makes reviews about troubleshooting problems rather than assessing performance. Laufer et al. describe the experience of Marty Davis, a project manager at NASA’s Goddard Space Flight Center, who “developed a review process that provided feedback from independent, supportive experts and encouraged joint problem solving”: The first thing Marty Davis did was to unilaterally specify the composition of the review panel to fit the unique needs of his project, making sure that the panel members agreed with his concept of an effective review process. The second thing he did was change the structure of the sessions, devoting the first day to his team’s presentations and the second day to one-on-one, in-depth discussions between the panel and the team members to come up with possible solutions to the problems identified on the first day. This modified process enabled Marty Davis to create a working climate based on trust and respect, in which his team members could safely share their doubts and concerns. The independent experts identified areas of concern, many of which, after one-on-one meetings with the specialized project staff and the review team’s technical specialists, were resolved. The issues that remained open were assigned a Request for Action (RFA). Eventually, Marty Davis was left with just five RFAs. This kind of approach to project reviews ensured a supportive, failure-tolerant environment, and with its emphasis on continuous learning, had long-term benefits for each team member. Exactly which items you need to monitor will vary from project to project, and from one industry to another. But in any industry, you usually only need to monitor a handful of metrics. 
There’s no need to over-complicate things. For example, when managing major construction projects for the Wisconsin Department of Transportation, Gary Whited focused on these major items: • Schedule • Cost/budget • Issues specific to the project • Risk He also recommends monitoring the following: • Quality • Safety • Production rates • Quantities (Whited 2014) In other kinds of projects, you will probably need to monitor different issues. But it’s always a good idea to focus on information that can serve as early warnings, allowing you to change course if necessary. This typically includes the following: • Current status of schedule and budget • Expected cost to complete • Expected date(s) of completion • Current/expected problems, impacts, and urgency • Causes for schedule/cost overruns As Whited explains, the bottom line is this: “If it’s important to the success of your project, you should be monitoring it” (2014). Note that measuring the percent complete on individual tasks is useful in some industries, where tasks play out over a long period of time. But according to Dave Pagenkopf, in the IT world the percent complete of individual tasks is meaningless: “The task is either complete or not complete. At the project level, percent complete may mean something. You really do need to know which tasks/features are 100% complete. But sloppy progress reports can generate confusion on this point. 100% of the functions in a software product 80% complete is not the same as having 80% of the features 100% complete. A poorly designed progress report can make these look the same, when they most definitely are not” (pers. comm., November 13, 2017). In addition to deciding what to monitor, you need to decide how often to take a particular measurement. As a general rule, you should measure as often as you need to make meaningful course corrections. For some items, you’ll need to monitor continuously; for others, a regular check-in is appropriate. Most projects include major milestones or phases that serve as a prime opportunity for monitoring important indicators. As Gary Whited notes, “The most important thing is to monitor your project while there is still time to react. That’s the reason for taking measurements in the first place” (2014). 11.3 Avoiding Information Overload As Chad Wellmon explains in his interesting essay, “Why Google Isn’t Making Us Stupid…or Smart,” the history of human civilization is the history of people trying to make sense of too much information (2012). As far back as biblical times, the writer of Ecclesiastes complained, “Of making books there is no end” (12:12). In the modern business world, we could update that famous quotation to read, “Of writing reports and sending emails there is no end.” Indeed, according to an article by Paul Hemp in the Harvard Business Review, many researchers argue that information overload is one of the chief problems facing today’s organizations, resulting in stressed-out, demoralized workers who lose the ability to focus efficiently and think clearly because their attention is constantly being redirected; lost productivity and reduced creativity due to constant interruptions; and delayed decision-making caused by people sharing information and then waiting for a reply before they can decide how to proceed. According to Hemp, one study that focused on unnecessary email at Intel set the cost of “information interruptions” at “nearly $1 billion” (Hemp 2009). So if you feel like you are drowning in a sea of information, you’re not alone. 
But as a project manager, you have the ability to shape all that data into something useful, whether by creating electronic, at-a-glance dashboards that collate vital statistics about a project, or by creating reports that contain only the information your audience needs. By doing so, according to Wellmon, you’ll be engaging in one of humanity’s great achievements—using technology to filter vast amounts of information, leaving only what we really need to know. As Wellmon puts it: “Knowledge is hard won; it is crafted, created, and organized by humans and their technologies.” When reporting on the health of their projects, successful project managers tailor the amount of detail, the perspective, and the format of information to the specific stakeholders who will be consuming it. Talking to your company’s CEO about your project is one thing. Talking to a group of suppliers and vendors is another. You need to assess the needs of your audience and provide only the information that is useful or appropriate to them. For example, in a report to upper management on a software development project, you might include data reporting costs to date, projected cost at completion, schedule status, and any unresolved problems. The report is unlikely to include details regarding programming issues unless a supervising manager has the technical ability and interest to be involved in such details. Dashboards for the coding team, however, would need to highlight progress on key unresolved coding issues and planned follow-up actions. Brian Price, the former chief power train engineer for Harley-Davidson, and an adjunct professor in the UW Master of Engineering in Engine Systems program, says it’s helpful to think in terms of providing layers of information to stakeholders. At the very top layer is the customer, who typically only needs to see data on basic issues, such as cost and schedule. The next layer down targets senior management, who mostly need to see dashboards with key indicators for all the projects in a portfolio. Meanwhile, at the lowest layer, the core project team needs the most detailed information in the form of progress reports on individual tasks. This approach keeps people from being overwhelmed with information they don’t really need. At the same time, it does not preclude any stakeholder from seeing the most detailed information, especially if it’s available through a virtual project portal (pers. comm., August 17, 2016). The decisions you make about what monitoring and controlling information to share with a particular audience are similar to the decisions you make about sharing schedules. In both cases, you need to keep in mind that your stakeholders’ attention is valuable. To put it in Lean terminology, attention is a wasteable resource (Huber and Reiser 2003). You don’t want to waste it by forcing stakeholders to wade through unnecessary data. Remember that the goal of monitoring and controlling information is to prompt stakeholders to respond to potential problems. In other words, you want to make it easy for stakeholders to translate the information you provide into action. 11.4 A Note About Dashboards A well-designed dashboard can be extremely useful, greatly minimizing the time required to put reports together. If the data is live—that is, updated continually—stakeholders can get updates instantaneously, instead of waiting for monthly project review meetings. 
Even a dashboard that is merely updated daily, or even weekly, can prevent the waste and delays that arise when people are working with outdated information. In his book Project Management Metrics, KPIs, and Dashboards: A Guide to Measuring and Monitoring Project Performance, Harold Kerzner discusses the importance of presenting monitoring information in a way that allows stakeholders to make timely decisions: The ultimate purpose of metrics and dashboards is not to provide more information but to provide the right information to the right person at the right time, using the correct media and in a cost-effective manner…. Today, everyone seems concerned about information overload. Unfortunately, the real issue is non-information overload. In other words, there are too many useless reports that cannot easily be read and that provide readers with too much information, much of which may have no relevance. It simply distracts us from the real issues…. Insufficient or ineffective metrics prevent us from understanding what decisions really need to be made. (2013, vii) A well-designed dashboard is an excellent tool for presenting just the right amount of information about project performance. The key to effective dashboards is identifying which dashboard elements are most helpful to your particular audience. Start by thinking about what those people need to focus on. For a given project, the same dashboard might not work for all groups. The dashboard you use to report to high-level managers might not be useful for people actually working on the project. Generally speaking, a dashboard should include only the information the intended audience needs to keep the project on track. A dashboard also helps senior managers evaluate different projects in their portfolio. They can quickly assess what’s working, what’s not working, and where they might provide assistance. In a two-part series for BrightPoint Consulting, a firm that specializes in data visualization, Tom Gonzalez explains how to create effective dashboards by focusing on key performance indicators (KPIs), which are metrics associated with specific targets (Gonzalez). You can download his series on dashboards here: www.brightpointinc.com/data-visualization-articles/. To learn more about KPIs, see this extremely helpful white paper, also by Tom Gonzalez: www.brightpointinc.com/download/key-performace-indicators/. Figure 11-1 provides an example of an effective dashboard. It is simple and easy to read, and focuses on a few KPIs. Figure 11-1: A simple, easy-to-read dashboard For a dashboard to be really useful, it’s essential that all stakeholders share the same definitions of common metrics such as “high,” “medium,” and “low.” Likewise, everyone has to understand the specific meaning of the colors used in any color-coded system. To learn more about designing effective dashboards, see Chapter 6 of Kerzner’s book. For some tips on best practices for dashboards, take a look at this web site: https://www.targetdashboard.com/site/kpi-dashboard-best-practice/default.aspx#KPI-Dashboard-Design. Of course, a dashboard is only one part of a monitoring system. It allows you to see what’s going on in the present. As a project manager, you have to balance looking at the dashboard with looking through the front window to see where the project is headed, while occasionally checking your rearview mirror to see if you might have missed something important. 
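As a rough illustration of the ideas above—a handful of KPIs, shared definitions, and an agreed-on color code—the hypothetical sketch below shows one way a dashboard row might pair each metric with a target and a status color. The metric names and thresholds are assumptions made up for this example, not a prescribed standard; the point is that the color rules are written down once and shared by everyone who reads the dashboard.

```python
# Hypothetical dashboard rows: each KPI pairs a current value with a target and a
# color code whose meaning all stakeholders have agreed on in advance.
from dataclasses import dataclass

@dataclass
class KPI:
    name: str      # metric label as it appears on the dashboard
    actual: float  # current measured value (lower is better for these examples)
    target: float  # agreed target value

    def status(self) -> str:
        """Green/yellow/red based on how far actual is from target (illustrative thresholds)."""
        ratio = self.actual / self.target
        if ratio <= 1.0:
            return "green"   # at or better than target
        if ratio <= 1.1:
            return "yellow"  # within 10% of target: watch closely
        return "red"         # more than 10% over target: needs action

dashboard = [
    KPI("Cost to date ($K)", actual=420, target=400),
    KPI("Schedule slip (days)", actual=3, target=5),
    KPI("Open high-priority issues", actual=7, target=4),
]

for kpi in dashboard:
    print(f"{kpi.name:30} {kpi.actual:>6} / {kpi.target:<6} {kpi.status()}")
```

A real dashboard tool would render this as gauges or colored tiles rather than printed lines, but the underlying discipline is the same: the value of the display comes less from the software than from stakeholders agreeing in advance what “green,” “yellow,” and “red” mean.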
Beyond the Status Report According to Dave Pagenkopf, Applications Development and Integration Director for DoIT at the UW-Madison, one effective form of monitoring in IT projects is asking team members to demonstrate their work: For software projects where I am the sponsor or a key decision maker, I ask for product demonstrations before going through status reports. Demonstration of working software or the lack thereof tells me more about progress than any status report could. I have been known to take a quick tour of a data center when a team says they have finished installing servers. I have a large monitor in my office, so people can show me working software during meetings. As a general rule, in IT, the best performers always want to show what they have done. The poor performers want to talk about what they have done. Another form of monitoring IT projects is simply taking a close look at the programmers. During marathon projects, when everyone is working nonstop, I look for signs of unspoken exhaustion that will inevitably lead to problems. Those usually show up first as changes in grooming habits, which I notice as I walk through the office. (pers. comm., November 22, 2017) That last suggestion is an example of managing by walking around (MBWA)—a management style that emphasizes unplanned encounters with team members, and spontaneous, informal reviews of equipment and ongoing work. Sometimes a two-minute conversation with a team member will tell you more about the health of a project than piles of status reports. MBWA was first popularized in the 1980s by Tom Peters and Robert H. Waterman in their book In Search of Excellence. You can read more about MBWA here: https://www.cleverism.com/management-by-walking-around-mbwa/. 11.5 Informed Intuition At some point in your career, you’ll find your intuition telling you one thing, while the monitoring data you have so laboriously collected tells you something else. For example, a recently updated schedule and a newly calculated budget-to-completion total might tell you a project is humming along as expected and that everything will finish on time and under budget. But still, you get a feeling that something is amiss. Maybe a customer’s tone of voice suggests unhappiness with the scope of the project. Or perhaps a product designer’s third sick day in a week makes you think she’s about to take a job with a different company, leaving you high and dry. Or maybe the sight of unopened light fixtures stacked in a corner at a building site makes you wonder if the electricians really are working as fast as status reports indicate. Monitoring Quality, Including Compliance When it comes to monitoring and control, project managers tend to focus on budget and schedule. But it’s also essential to monitor quality. For example, does the concrete used in a building project match the required standards? In an IT project, is the software free of bugs? Sometimes monitoring quality involves ensuring regulatory compliance, including meeting standards on how you conduct your project. Major corporations spend many millions of dollars each year on compliance programs designed to ensure that they follow the law, including the host of government regulations that apply to a typical organization. The ultimate goal of any compliance program is to prevent employees from breaking the law and, ideally, to encourage ethical behavior. At times like these, you might be tempted to take action based solely on gut instinct. 
But as discussed in Lesson 2, that kind of unexamined decision-making leaves you vulnerable to the errors in thinking known as cognitive biases. For instance, suppose you’ve been working with Vendor A for several months, always with good results. Then, at a conference, you hear about Vendor B, a company many of your colleagues seem to like. You might think you’re following a simple gut instinct when you suddenly decide to switch from Vendor A to Vendor B, when in fact your decision is driven by the groupthink cognitive bias, which causes people to adopt a belief because a significant number of other people already hold that belief. In an article for the Harvard Business Review, Eric Bonabeau discusses the dangers of relying on pure intuition, or gut instincts: Intuition has its place in decision making—you should not ignore your instincts any more than you should ignore your conscience—but anyone who thinks that intuition is a substitute for reason is indulging in a risky delusion. Detached from rigorous analysis, intuition is a fickle and undependable guide—it is as likely to lead to disaster as to success. And while some have argued that intuition becomes more valuable in highly complex and changeable environments, the opposite is actually true. The more options you have to evaluate, the more data you have to weigh, and the more unprecedented the challenges you face, the less you should rely on instinct and the more on reason and analysis. (Bonabeau 2003) As Bonabeau suggests, you don’t want to detach intuition from analysis. Instead, you want your intuition to spur you on to seek more and better information, so you can find out what’s really going on. You can then make a decision based on informed intuition—a combination of information and instinctive understanding. You develop it through experience and by constantly learning about your individual projects, your teammates, your organization, and your industry. It can allow you to spot trouble before less experienced and less informed colleagues do. According to cognitive psychologist Gary Klein, this kind of instinctive understanding is really a matter of using past experience to determine if a particular situation is similar to or different from past situations. This analysis occurs so fast it seems to exist outside of rational thought, but is in fact supremely rational. By studying firefighters in do-or-die situations, Klein developed a new understanding of this form of thought: Over time, as firefighters accumulate a storehouse of experiences, they subconsciously categorize fires according to how they should react to them. They create one mental catalog for fires that call for a search and rescue and another one for fires that require an interior attack. Then they race through their memories in a hyperdrive search to find a prototypical fire that resembles the fire that they are confronting. As soon as they recognize the right match, they swing into action. Thought of this way, intuition is really a matter of learning how to see—of looking for cues or patterns that ultimately show you what to do. (Breen 2000) Klein doesn’t use the term informed intuition, but that’s what he’s talking about. Informed intuition is a matter of learning how to see, so you can analyze a situation in an instant and take the necessary action. That’s definitely something to aspire to as you proceed through your project management career. 
11.6 The Illusion of Linearity The best monitoring data in the world is useless if you lack the ability to interpret it correctly. One of the most common interpretation errors is assuming the relationship between two things is linear when it is in fact nonlinear. Numerous studies in cognitive psychology have shown that humans have a hard time grasping nonlinear systems, where the relationship between cause and effect is uncertain. A cognitive bias in favor of linearity makes us naturally predisposed to perceive simple, direct relationships between things, when in reality more complex forces are at play. Bart de Langhe, Stefano Puntoni, and Richard Larrick explain the perils of linear thinking in a nonlinear world in this classic article for the Harvard Business Review: https://hbr.org/2017/05/linear-thinking-in-a-nonlinear-world. For example, marketing forecasts often assume a linear relationship between consumer attitudes and behavior, when in fact things are much more complicated. One study focused on the relationship between consumers’ stated preference for organic products and the same consumers’ actual behavior. You might think that someone with a strong preference for organic products would buy more organic vegetables than someone with a less strong preference for organic products. You might be surprised to learn that this is not the case, because the relationship between consumer attitudes and behavior is nonlinear (van Doorn, Verhoef and Bijmolt 2007). Project managers fall prey to the linearity bias frequently, especially when it comes to the relationship between time and the many elements of a project. Because time is shown on the x-axis in Microsoft Project, we make the mistake of thinking that individual tasks will be completed in one linear stream of accomplishment. In reality, however, the relationship graph may take the form of a curve or a step function. Failure to grasp this means that any attempts to monitor and control a project are founded on incorrect assumptions, and therefore doomed to failure. In addition to muddying your understanding of cause and effect, the linearity bias can cause you to confuse activity with accomplishment. But just because people are bustling around the office does not mean they are actually getting anything done. Think of the kind of unfocused activity that often occurs as you’re getting ready to move from one home to the next. You might spend some time sorting kitchen utensils until you get distracted by alphabetizing your CD collection before you pack it away in boxes. Then, suddenly, the movers show up, and you kick into gear. In one hour, you might accomplish more than in the previous three days. A graph of your accomplishments during the move might look like the step function shown in Figure 11-2, with very little of importance actually being accomplished, followed by a great deal being accomplished. Figure 11-2: Productivity often takes the form of a step function; here, the process of packing up to move begins with very little being accomplished, followed by the movers showing up, at which point a great deal is accomplished As a project manager, you need to make your monitoring measures factor in the nonlinearity of resource use. Resource expenditures are often low at first. As a result, an inexperienced project manager might be lulled into thinking she is working with a linear system, in which resource expenditures will continue at the same rate throughout the project. 
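To see why that assumption is risky, consider a small, made-up example whose weekly figures roughly mirror the pattern in Figure 11-3: extrapolating linearly from the first ten weeks of spending badly understates what the project will actually consume.

```python
# Illustrative only: weekly resource spend (as % of total budget) for a 13-week project,
# loosely following Figure 11-3: about 1% per week for ten weeks, then a sharp uptick.
weekly_spend = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 20, 35, 35]  # percent of budget per week

observed_weeks = 10
linear_rate = sum(weekly_spend[:observed_weeks]) / observed_weeks  # 1% per week so far
linear_forecast = linear_rate * len(weekly_spend)                  # naive straight-line total
actual_total = sum(weekly_spend)

print(f"Linear forecast of total spend: {linear_forecast:.0f}% of budget")  # 13%
print(f"Actual total spend:             {actual_total:.0f}% of budget")     # 100%
```

The numbers are invented, but the shape is typical: the early weeks look calm, and a straight-line extrapolation hides the surge to come.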
In most projects, most of the resources are used up near the end of the project. Suddenly, the slope of the graph illustrating resource use over time takes a vertical turn, as in Figure 11-3. Figure 11-3: Resource consumption can seem linear and then change dramatically; here, resources are consumed at a linear rate of 1% per week through week 10, followed by a sudden uptick in weeks 10-13 Note that the step function model of productivity applies to most Agile projects. Productivity is zero until the team can demonstrate that they have created a working feature, at which point the productivity graph takes a step up. Ideally, each sprint causes another step up, but if the client is not satisfied with the outcome of a particular sprint, productivity stays flat until the end of the next sprint. ~Practical Tips Here are a few practical tips related to monitoring and controlling: • Keep your audience in mind: When presenting monitoring information to stakeholders, always keep the audience in mind. When you are communicating with executives, a high-level summary is most useful. When communicating with the people who are actually implementing the project, more detail will be required. • Make sure stakeholders can deal with bad news: A monitoring system is only useful if team members are willing and able to respond to the news it provides about project performance, especially when it suggests the existence of serious problems. Make sure everyone on the project team is willing to identify bad news and deal with it as early as possible. • Look at the bigger picture: Vital monitoring information sometimes comes from beyond the immediate project. Weather, personnel issues, and the economy can all affect what you hope to accomplish. Take care not to get so focused on the details of incoming monitoring information that you miss the bigger picture. If that’s not your strong suit, remember to check in with team members who are good at seeing the big picture. Understanding what’s happening in the regional, national, and global economy, for instance, might help you manage your project. • Simplify: A few key metrics are better than too many metrics, which may be confusing and contradictory. • Pay attention to non-quantitative measures: Client satisfaction, changes in market preferences, public perceptions about the project, the physical state of team members (Do they appear rested and groomed as usual?), and other non-quantitative measures can tell you a lot about the health of your project and are worth monitoring. • Be alert for bias in data collection: Make sure your monitoring systems give you an objective picture of the current state of your project. • Be mindful of the effect of contracts on monitoring and controlling efforts: The type of contract governing a project can affect the amount and type of monitoring and controlling employed throughout a project. In a time and material contract, where you get paid for what you do, a contractor will carefully monitor effort because that is the basis of payment. They might not be motivated to control effort because the more they use, the more they are paid. With a lump sum contract, the contractor will be highly motivated to monitor and control effort because compensation is fixed and profit depends largely on effective control. 
• Be sure to communicate key accomplishments, next steps, and risk items: When reading monitoring reports, managers are often looking for just enough information about the project to allow them to feel connected and to report to the next level up in management. You can make this easier for them by including in your reports a list of deliverables from the last thirty days, a list of what’s expected in the next thirty days, and risks they need to be mindful of. Finally, here are additional helpful suggestions from Gary Whited (2014): • Collect actionable information: Focus monitoring efforts on information that is actionable. That is, the information you collect should allow you to make changes and stay on schedule/budget. • Keep it simple: Don’t set up monitoring and controlling systems that are so complicated you can’t zero in on what’s important. Simplicity is better. Focus on measures that are key to project performance. • Collect valuable data, not easy-to-collect data: Don’t fall into the trap of focusing on data that is easy to collect, rather than on data that is tied to an actual benefit or value. • Avoid unhelpful measures: Avoid measures that have unnecessary precision, that draw on unreliable information, or that cause excessive work without a corresponding benefit. • Focus on changeable data: Take care not to over-emphasize measures that have little probability of changing between periods. ~Summary • Project monitoring and controlling, which occurs simultaneously with execution, involves reconciling “projected performance stated in your planning documentation with your team’s actual performance” and making changes where necessary to get your project back on track (Peterman 2016). The best monitoring and controlling system encourages active control, which involves: 1) controlling what you can by making sure you understand what’s important, taking meaningful measurements, and building an effective team focused on project success; and 2) adapting to what you can’t control through early detection and proactive intervention. • The type of monitoring that works for one project might not work for another, even if both projects seem similar. Exactly which items you need to monitor will vary from project to project, and from one industry to another. But in any industry, you usually only need to monitor a handful of metrics. As a general rule, you should measure as often as you need to make meaningful course corrections. • You can prevent information overload by shaping monitoring data into electronic, at-a-glance dashboards that collate vital statistics about a project, and reports that contain only the information your audience needs. Always tailor the amount of detail, the perspective, and the format of information in a report to the specific stakeholders who will be consuming it. • A well-designed dashboard is an excellent tool for presenting just the right amount of information about project performance. The key to effective dashboards is identifying which dashboard elements are most helpful to your particular audience. • Gut instinct, or pure intuition, can make you vulnerable to the errors caused by cognitive biases. You’ll get better results by linking intuition to analysis and learning. The result, informed intuition, is a combination of information and instinctive understanding acquired through learning and experience. 
• The linearity bias—a cognitive bias that causes people to perceive direct linear relationships between things that actually have more complex connections—can make it hard to interpret monitoring data correctly. • Compliance programs—which focus on ensuring that organizations and their employees adhere to government regulations, follow all other laws, and behave ethically—require the same kind of careful monitoring and controlling as any organizational endeavor. ~Glossary active control—A focused form of project control that involves the following: 1) controlling what you can by making sure you understand what’s important, taking meaningful measurements, and building an effective team focused on project success; and 2) adapting to what you can’t control through early detection and proactive intervention. compliance program—A formalized program designed to ensure that an organization and its employees adhere to government regulations, follow all other laws, and behave ethically. controlling—In the monitoring and controlling phase of project management, the process of making changes in response to data generated by monitoring tools and methods to avoid missing major milestones. earned value management (EVM)—An effective method of measuring past project performance and predicting future performance by calculating variances between the planned value of a project at a particular point and the actual value. informed intuition—A combination of information and instinctive understanding. You develop informed intuition through experience and by constantly learning about your individual projects, your teammates, your organization, and your industry. key performance indicator (KPI)—A metric associated with a specific target (Gonzalez). linearity bias—A cognitive bias that causes people to perceive direct, linear relationships between things that actually have more complex connections. managing by walking around—A management style that emphasizes unplanned encounters with team members, and spontaneous, informal reviews of equipment and ongoing work. monitoring—In the monitoring and controlling phase of project management, the process of collecting progress data and sharing it with the people who need to see it in a way that allows them to understand and respond to it. monitoring and controlling—The process of reconciling “projected performance stated in your planning documentation with your team’s actual performance” and making changes where necessary to get your project back on track (Peterman 2016). Monitoring and controlling occurs simultaneously with execution.
textbooks/biz/Management/Book%3A_Technical_Project_Management_in_Living_and_Geometric_Order_(Russell_Pferdehirt_and_Nelson)/1.11%3A_Project_Monitoring_Analytics_and_Control.txt
Experts often possess more data than judgment. —Colin Powell, Secretary of State (Harari 2003) Objectives After reading this chapter, you will be able to • Discuss the importance of getting the fundamentals right and keeping them right throughout a project • Explain the value of project reviews and audits • Describe issues related to correcting course mid-project and decisions about terminating a project • Discuss the project closure phase The Big Ideas in this Lesson • Many little things can go wrong in a project, but as long as you get the fundamentals right and keep them on target, a project is likely to achieve substantial success. However, just because you have the fundamentals right at the beginning of a project doesn’t mean they’ll stay that way. • Throughout the life of a project, you need to stop, look, and listen, and adjust course as necessary. Focus more on staying flexible than on seeking accountability for every little thing that goes wrong in a project. • By conducting regular, careful periodic reviews, you increase the chance of detecting strategic inflection points in your projects early enough to allow you time to adapt and adjust. 12.1 Getting the Fundamentals Right The Toyota Way to Stop, Look, and Listen A key principle of the famously Lean Toyota Production System is genchi genbutsu, which means “go and see for yourself.” In other words, if you really want to know what’s going on in a project, you need to actually go to where your team is working, and then watch and listen. This idea is predicated on the fact that “when information is passed around within organizations it is inevitably simplified and generalized. The only real way to understand a problem is to go and see it on the ground” (The Economist 2009). You can learn how Yuji Yokoya, a Toyota engineer, used genchi genbutsu (in the form of a 53,000-mile drive across North America) to plan a redesign of the Toyota Sienna: https://www.forbes.com/forbes/2003/0217/056a.html#1c660d4575d6. Many little things can go wrong in a project, but as long as you get the fundamentals right and keep them on target, a project is likely to achieve substantial success. As you’ve learned throughout this book, the best way to get the fundamentals right is to collaborate with stakeholders to create a comprehensive, realistic plan, while also remaining adaptable to the inevitable living order changes that will come your way. But just because you have the big things right at the beginning of a project doesn’t mean they’ll stay that way. Throughout the life of a project, you need to stop, look, and listen. That is, you need to stop periodically to conduct mid-project reviews/audits; look at the data about scope, quality, and schedule; and listen to the words of team members. Regular stop-look-and-listen breaks will provide essential insights into the current state of your project, and its prospects for the future. Even if you’re working on a project that seems identical to others you’ve worked on in the past, you need to stay alert to the possibility that the ground could suddenly shift beneath your feet. And the only way to know if that’s happening is to regularly stop, look, and listen. Or to use the words of Andrew Grove, CEO of Intel from 1987 to 1998, you need to maintain a “guardian attitude” toward all your projects, cultivating a constant level of paranoia about what you might not know about your project (1999, 3). 
In particular, Grove argues, you need to be paranoid about strategic inflection points, which can upend even the best-laid plans. In his book Only the Paranoid Survive, he explains the dangers strategic inflection points pose to entire organizations, although much of what he says can apply equally well to individual projects: A strategic inflection point is a time in the life of a business when its fundamentals are about to change. That change can mean an opportunity to rise to new heights. But it may just as likely signal the beginning of the end. Strategic inflection points can be caused by technological change but they are more than technological change. They can be caused by competitors but they are more than just competition. They are full-scale changes in the way business is conducted, so that simply adopting new technology or fighting the competition as you used to may be insufficient. They build up force so insidiously that you may have a hard time even putting a finger on what has changed, yet you know that something has. Let’s not mince words: A strategic inflection point can be deadly when unattended to. Companies that begin a decline as a result of its changes rarely recover their previous greatness. But strategic inflection points do not always lead to disaster. When the way business is being conducted changes, it creates opportunities for players who are adept at operating in the new way. This can apply to newcomers or to incumbents, for whom a strategic inflection point may mean an opportunity for a new period of growth. (Grove 1999, 3-4) Drawing on his many years of experience in the semiconductor business, Grove argues that the people best positioned to detect strategic inflection points are middle managers: In middle management, you may very well sense the shifting winds on your face before the company as a whole and sometimes before your senior management does. Middle managers—especially those who deal with the outside world, like people in sales—are often the first to realize that what worked before doesn’t quite work anymore; that the rules are changing. They usually don’t have an easy time explaining it to senior management, so the senior management in a company is sometimes late to realize that the world is changing on them—and the leader is often the last of all to know. (Grove 1999, 21-22) The Power of Checklists You might occasionally hear people dismiss an audit as a checklist exercise in which project managers work their way through a list of items by rote, with no attempt to make decisions based on experience and judgment. However, a judicious use of checklists can be highly beneficial during any auditing and review process. Atul Gawande has written extensively on checklists used by skilled professionals, such as surgeons and airline pilots. In his books The Checklist Manifesto and Better, he illustrates the power of this simple tool. This New Yorker article by Gawande is a good introduction to the topic: http://www.newyorker.com/magazine/2007/12/10/the-checklist. As a project manager, the best way for you to sense those shifting winds is to practice regular stop-look-and-listen breaks. You might detect strategic inflection points in your industry—and if so, you can use what you’ve learned to make your case to upper management. 
But the fact is you are more likely to detect strategic inflection points in your individual projects, which, taking our inspiration from Grove, we define here as a time in the life of a project when its fundamentals are about to change. Your goal, as a project manager, is to detect strategic inflection points in your projects early enough to allow you time to adapt and adjust. The best way to make sure that happens is to conduct regular project audits. 12.2 Auditing: The Good, the Bad, the Ugly To stay in good health, it’s important to monitor some basics every day, perhaps by checking your weight or wearing a fitness monitor to make sure you get enough exercise. But sometimes you need to schedule a full workup to get external insights from a knowledgeable medical professional. The same is true of technical projects. Even if you have implemented reliable monitoring systems designed to alert you to any serious problems, as recommended in Lesson 11, every now and then you need to dive deeper into your project via an audit so that you can learn everything you need to know—the good, the bad, the ugly, and the unexpected. So what exactly is a project audit? It is a deep investigation into any or all aspects of a project, with the aim of enabling stakeholders to make fully informed decisions about the project’s future. An audit can provide a focused, objective review of part or all of a project. Scrum Retrospective The idea of an audit is built into Scrum, the most popular form of Agile software development. This pause in development, known as a retrospective, “is an opportunity for the Scrum Team to inspect itself and create a plan for improvements to be enacted during the next Sprint” (Scrum.org n.d.). Like any group critique, retrospectives can be contentious, and are often not handled well. You can learn more about how to engineer a helpful retrospective here: https://www.scrum.org/resources/what-is-a-sprint-retrospective. Audits can be relatively informal or formal. An informal audit is a relatively quick evaluation of a project, as when a new project manager attempts to take stock of a project by talking to everyone involved, and trying to learn as much as possible about the project objectives. A formal audit is more systematic, and is typically conducted by someone external to the project, or even, depending on the scope of the audit, external to the organization. The ultimate goal of any audit is to generate actionable intelligence that can be used to improve the project or, when necessary, justify shutting it down. This intelligence is usually presented in the form of an audit report, which typically contains an explanation of the context of the audit, including the overall focus or any important issues; an analysis of data, interviews, and related research compiled during the audit; action-oriented recommendations; and, in some cases, lessons learned and possibly one or more supporting appendices. In some organizations, audits or formal project reviews are conducted at the end of certain phases to determine if the project is worth continuing or if the project plan requires significant changes before the team moves forward. An audit can be used to • Review all projects meeting certain criteria (size, risk, client, regulations, etc.) 
• Revalidate the business feasibility of a project • Reassure upper management that a project is viable • Reconfirm upper management support for the project • Confirm readiness to move to the next project phase • Investigate specific problems to determine the next step • Verify market conditions Issues that could be addressed in an audit include • Project rationale: Why was the project selected in the first place? Is that rationale still valid? • Project’s role in the organization’s priorities: As markets change, requirements for projects also change. Have recent changes lessened or increased the project’s priority? Do you need to end the project entirely or should you add more resources in order to finish it more quickly? • Team status: Is the project team functioning well and appropriately staffed? • External factors affecting the project’s direction and importance: Have new regulations, competing products, or technology altered the playing field? • Budget and schedule: It’s important to get accurate data on the current status of the budget and schedule, and check on the reasonableness of projections at completion. An independent reviewer can sometimes turn up previously unperceived issues regarding these two essential items. • Performance of contractors: How’s the quality of their work? Are they on schedule? Are their budget projections in line with reality? Checking in With the Team In addition to auditing individual projects, it’s a good idea to conduct regular audits of your team. And be sure to include yourself, as the project manager, in the audit. Few people like being formally evaluated in their work, but you can minimize the negative feelings by conducting team audits often and routinely, so people see them as simply part of their job, and not as a targeted attempt to undermine them. Brian Price (see “From the Trenches,” later in this lesson) has the following suggestions for anyone conducting a team audit: • Begin by asking the individual to evaluate his or her own performance. • Avoid drawing comparisons with other team members; rather, assess the individual in terms of established standards and expectations. • Focus criticism on specific behaviors rather than on the individual personally. • Be consistent and fair in your treatment of all team members. • Treat the review as one point in an ongoing process. (2007) Anonymous surveys of the team are one way to conduct a team audit. This article from Slate describes a survey app that works much like a dating app, allowing people to swipe left or right to rate their own performance, as well as the performance of team members and their managers: http://www.slate.com/articles/business/the_ladder/2016/06/can_new_app_tinypulse_disrupt_performance_reviews.html. Whatever survey method you choose, make sure your team sees you use the information obtained from the survey to improve the team’s performance. Otherwise they’ll lose confidence in future team audits, and in you as a project manager. Team audits can address individual “burn-out” issues and help improve not only individual performance but also team retention. Different organizations have different auditing procedures, but the heart of any audit is listening to the opinions of the people involved in the project via interviews or surveys. According to Todd C. Williams, author of Rescue the Problem Project, People are the critical piece in determining a project’s success or failure. They approve the inception, allow scope creep, define the technical solution, and levy constraints. 
What are the team's dynamics? Who are the sponsors? What are their expectations? What is the leadership's strength? What do these people think is wrong with the project? Does the team have the right skills? What would the team do to fix the project? The answers to these questions lead to more questions and eventually point to the root problems…. In other words, team members know the problems and their accompanying resolutions; someone just needs to ask them. Therefore, the people involved in a project are the best place to start an audit. (2011, 35)
The Pull Value of an Audit
Audits provide an excellent opportunity to learn and assess. They also provide a safe opportunity to ask the question: Should we continue this project (with or without modifications) or should we terminate it? When conducted routinely—and always with the guardian attitude recommended by Andrew Grove—they allow for a periodic timeout in which the team steps back to view their progress from a higher perspective, focusing on quality, schedule, cost, resources, and general viability. An audit should result in some kind of report summarizing the audit findings, but generally speaking, you should avoid viewing an audit as an opportunity for excessive documentation of the past. Instead, think of an audit as an opportunity to pull from the desired ends of the project to the current state, asking some essential questions:
• What has to happen next to best assure success in reaching the desired end state, allowing us to deliver the promised value?
• Is the next phase in the project worth the required investment?
Take the Sensitive Approach
Organizations vary in their approach to audits. Some conduct audits routinely on all major projects. Others reserve audits for projects that appear to be heading for trouble. In other organizations, audits are conducted routinely only for certain types of projects. Whatever approach your organization takes, it's essential to structure and conduct an audit in a manner appropriate to the project and to the people and organizations involved. In particular, you need to be sensitive to the culture of the project team itself, so as not to alienate the people you will be relying on to give you accurate information about the project. Organizations that conduct regular, structured assessments of all projects tend to create a safer, more open environment for meaningful, helpful project reviews and associated follow-up action. Sometimes even simply using the term "project review" instead of "audit" can make the activity seem less threatening. Also, if reviews are conducted for all projects, project managers and team members are less likely to feel under attack during a project audit, as they understand this is part of business as usual. This can build a culture that values open, frank review, discussion, and collaborative problem-solving. Make sure stakeholders see the audit as an attempt to learn about the project, rather than a blame-seeking investigation. A professional, systematic approach, in which you listen carefully and respectfully to all parties, will go a long way toward calming anxious participants. The more informed people are about the planning and delivery of an audit, and the more opportunities they have to offer input, the more helpful the audit results will be.
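Because the deliverable of an audit or review is ultimately a structured report assembled from what stakeholders tell you, some teams find it useful to sketch that structure before the first interview. The short Python sketch below is purely illustrative—the field names and theme-counting approach are assumptions made for demonstration, not a format prescribed by this text or by Williams.

from collections import Counter
from dataclasses import dataclass, field

@dataclass
class AuditReport:
    """Minimal stand-in for the report contents described earlier in this lesson."""
    context: str
    analysis: list = field(default_factory=list)          # recurring themes from interviews
    recommendations: list = field(default_factory=list)   # action-oriented next steps
    lessons_learned: list = field(default_factory=list)

def recurring_themes(responses):
    """Tally the concerns raised in interviews so repeated ones surface in the analysis."""
    counts = Counter()
    for concerns in responses.values():   # responses maps interviewee -> list of concerns
        counts.update(concerns)
    return [theme for theme, n in counts.most_common() if n > 1]

# Example usage with made-up interview notes.
notes = {
    "developer": ["unclear scope", "late requirements"],
    "sponsor": ["unclear scope", "budget drift"],
}
report = AuditReport(context="Quarterly review of Project X")
report.analysis = recurring_themes(notes)
print(report.analysis)  # ['unclear scope']

The point is not the code; it is the discipline the code represents—decide up front what the report must contain, then let the interviews fill it in.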
Characteristics of an Effective Audit Leader
• No direct involvement or direct interest in the project
• Respected by senior management and other project stakeholders (perceived as impartial and fair)
• Willingness to listen
• Independence and authority to report audit results without fear of recriminations from special interests
• Perceived as having the best interests of the organization in mind when making decisions
• Broad-based experience in the organization or industry
A project auditor is the person responsible for leading an audit or review. Ideally, the project auditor is an outsider who is perceived by all stakeholders to be fair and objective. He or she should have excellent listening skills and broad-based knowledge of the organization or industry. It's helpful to use an audit team consisting of peers from other projects. This can help ensure that the team under review feels that their auditors understand the constraints they face in executing the project; they'll engage with the auditing team as peers, rather than as a critical body. The project teams can return the favor by critiquing the auditors' project at another stage. If an audit is undertaken by an external party, it is important that the audit team is respected by the team under review. This helps defuse any feelings of being unfairly criticized.
The Right Person for the Job
An important key to a successful audit is an audit leader who is trusted and respected by all stakeholders, who is believed to have the best interests of the organization at heart, and who has broad-based experience in the industry. In some situations, to avoid the appearance of a conflict of interest, it's best to choose as the auditor an impartial person with no direct involvement in the project. As Michael Stanleigh explains, an auditor who is unconnected to the project makes it possible for team members and other stakeholders to be completely candid:
They know that their input will be valued and the final report will not identify individual names, rather it will only include facts. It is common that individuals interviewed during the project audit of a particularly badly managed project will find speaking with an outside facilitator provides them with the opportunity to express their emotions and feelings about their involvement in the project and/or the impact the project has had on them. This "venting" is an important part of the overall audit. (n.d.)
However, to avoid the appearance that the audit is designed to catch the team doing something wrong, sometimes it's better to allow the team to review itself. This approach can encourage people to step forward to share what they've learned about the project, both good and bad. After all, the ultimate point of an audit or project review is to help the organization learn about the project. Because an audit is primarily a learning experience, the ideal audit leader has the ability to listen to what other people are saying, as well as to what they are not saying, looking beneath the surface for hidden currents that are shaping the project's performance. The audit leader should then be able to weave all the information obtained in the audit into a coherent picture of the project's current status and future prospects. In addition to these formidable personal requirements, an audit leader should be granted the ability to operate independently, with the authority to report audit results without fear of recrimination.
He or she has to be willing to deliver bad news if necessary, and must have an appropriate forum to do so—whether in formal reports, presentations, or emails.
Fail Fast
Terminating a project is hard, especially if the team is emotionally and professionally invested in the project. Adopting a fail-fast methodology, especially for high-risk projects, can normalize project termination, making it easier for a project team to pull the plug when necessary. This article discusses the success reaped by business leaders who weren't afraid to confront their own failures: http://www.newyorker.com/business/currency/fail-fast-fail-often-fail-everywhere.
Todd C. Williams uses the term recovery manager to refer to a consultant who is brought in from the outside to audit a failing project, and, if possible, steer it to a successful conclusion. In his view, selecting the right recovery manager is the essential first step:
Selecting the right recovery manager is critical. Avoid choosing someone currently involved with the project, as people involved in the project are too close to see the issues and may be perceived as biased by the stakeholders. At a minimum, the person doing the audit should be someone outside the extended project and unassociated with the product. An objective view is critical to a proper audit and reducing any preconceptions of a solution. The ideal candidate is a seasoned, objective project manager who is external to the supplier and customer, has recovery experience, and a strong technical background (for the conversations with the technical team). Compare this with hiring a financial auditor. No one would ever recommend engaging someone internal or with no experience, as it would create too high a chance of someone not believing the audit results…. Above all, recovery managers need to be honest brokers—objectivity is paramount. They cannot have allegiance to either side of the project. (17-19)
12.3 Correcting Course or Shutting a Project Down
First, Admit You Have a Problem
In his book Rescue the Problem Project, Todd C. Williams shares what he's learned as a professional "rescue manager" who traveled the world, applying his expertise to help turn around endangered projects in several industries. When he shows up on the scene, his first goal is to get project stakeholders to acknowledge the existence of a problem in the first place: "All afflictions, from everyday ailments to addictions, have one thing in common—if people choose to ignore them, they remain untreated. Therefore, before you start any process, you must admit there is a problem. Without admitting a problem exists and committing to resolve it, the problem will continue. It may morph and manifest itself in a new way, but it still exists" (15).
Based on the audit's findings, the team could decide to proceed per current plans, revise the plan (i.e., tasks and sequence), revise the schedule, revise the budget, revise the scope, bring in new team members or remove team members, or terminate the project. A project audit can also investigate whether a team is adding or losing members too frequently, causing the project to veer from one goal to another. Sometimes only a few quick, easy-to-implement course corrections are required. But if large-scale changes are necessary, you will need to agree on a change management strategy that will minimize resistance to the necessary alterations "through the involvement of key players and stakeholders" (Business Dictionary n.d.). Resistance to any kind of change is often driven by fear.
And as Vijay Govindarajan and Hylke Faber explain, we are never our best selves when we are afraid:
When we're in the grip of our fears, we are at least 25 times less intelligent than we are at our best. We don't think straight. And we'll most likely reject anything that takes us out of our comfort zone. This reaction is well known today as the "amygdala hijack." It's when our more primitive, or "crocodilian" brain wired for survival takes over. When our crocodiles are active, we are resistant to change and are operating from a fear of survival. Our crocodiles are trying to keep us safe, at the cost of innovation and change. (2016)
Govindarajan and Faber argue that the best way to drain fear of its power is to speak in a straightforward and matter-of-fact way about team members' anxieties. They also recommend using humor when appropriate, and projecting an aura of confidence and courage.
The Fine Art of Decision-Making
Successfully correcting course in a project presumes that you and your team are effective decision-makers. So you'd be wise to learn all you can about decision-making throughout your career. Here are a few resources to help you get started:
• Decisive: How to Make Better Choices in Life and Work by Chip and Dan Heath—An introduction to basic research on decision-making, with pointers on how to make better choices.
• Smart Choices: A Practical Guide to Making Better Decisions by John S. Hammond, Ralph L. Keeney, and Howard Raiffa—A more analytical approach to decision-making that emphasizes establishing a useful process that "gets you to the best solution with a minimal loss of time, energy, money, and composure" (2015, 3).
• How to Make Decisions: Making the Best Possible Choices—A quick overview of helpful decision-making strategies: https://www.mindtools.com/pages/article/newTED_00.htm.
• Deciding How to Decide: An evaluation of useful decision-making tools, with suggestions on how to choose the right tool for a particular decision: https://hbr.org/2013/11/deciding-how-to-decide.
If an audit reveals the painful truth that it's time to terminate a project, then it's important to realize that this is not necessarily a bad thing:
Canceling a project may seem like a failure, but for a project to be successful, it must provide value to all parties. The best value is to minimize the project's overall negative impact on all parties in terms of both time and money. If the only option is to proceed with a scaled-down project, one that delivers late, or one that costs significantly more, the result may be worse than canceling the project. It may be more prudent to invest the time and resources on an alternate endeavor or to reconstitute the project in the future using a different team and revised parameters. (Williams, 8)
When considering terminating a project, it's helpful to ask the following questions:
• Has the project been made obsolete or less valuable by technical advances? For instance, this might be the case if you're developing a new cell phone and a competitor releases new technology that makes your product undesirable.
• Given progress to date, updated costs to complete, and the expected value of the project's output, is continuation still cost-effective? Calculations about a project's cost-effectiveness can change over time. What's true at the beginning of the project may not be true a few months later. This is often the case with IT projects, where final costs are often higher than expected. (A rough expected-value comparison like the one sketched after this list can help frame the question.)
• Is it time to integrate the project into regular operations? For example, an IT project that involves rolling out a new network system will typically be integrated into regular operations once network users have transitioned to the new system.
• Are there better alternative uses for the funds, time, and personnel devoted to the project? As you learned in Lesson 2, on project selection, the key to successful portfolio management is using scarce resources wisely. This involves making hard choices about the relative benefits of individual projects. This might be an especially important concern in the case of a merger, when an organization has to evaluate competing projects and determine which best serve the organization's larger goals.
• Has a strategic inflection point, caused by a change in the market or regulatory requirements, altered the need for the project's output?
• Does anything else about the project suggest the existence of a strategic inflection point—and therefore a need to reconsider the project's fundamental objectives?
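The cost-effectiveness question above comes down to a comparison that is easy to get wrong under pressure: sunk costs should not drive the decision; only the remaining cost to complete, the expected value of the finished deliverable, and the value of the best alternative use of the same resources matter. The following Python sketch is a minimal illustration of that comparison with invented numbers—an assumption-laden teaching aid, not a formula endorsed by the text.

def should_continue(expected_value, cost_to_complete, best_alternative_value=0.0):
    """Return True if finishing the project beats stopping and redeploying its resources.

    Sunk costs are deliberately ignored: only future costs and benefits count.
    """
    value_of_continuing = expected_value - cost_to_complete
    return value_of_continuing > best_alternative_value

# Invented example: 400,000 of expected value, 250,000 still needed to finish,
# and an alternative project expected to net 120,000 with the same resources.
print(should_continue(400_000, 250_000, 120_000))  # True, because 150,000 > 120,000

In practice the inputs are estimates with wide error bars, so a calculation like this is a framing device for discussion rather than an automatic verdict.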
Determining whether to terminate a project can be a very difficult decision for people close to a project to make. As Figure 12-1 illustrates, your perspective on a project has a huge effect on your judgment of its overall success. That is why a review conducted by an objective, external auditor can be so illuminating.
Figure 12-1: Your definition of project success and failure depends on your perspective
Source: Adapted from Figure 1-1, Rescue the Problem Project: A Complete Guide to Identifying, Preventing, and Recovering from Project Failure
12.4 Closing Out a Project
Project closure is traditionally considered the final phase of a project. It includes tasks such as the following (see the checklist sketch later in this section):
• Transferring deliverables to the customer
• Cancelling supplier contracts
• Reassigning staff, equipment, and other resources
• Finalizing project documentation by adding an analysis summarizing the project's ups and downs
• Making the documentation accessible to other people in your organization as a reference for future projects
• Holding a close-out meeting
• Celebrating the completed project
Seen from a geometric order perspective, these tasks do mark the definitive end of a project. However, in the broader, living order vision of a project's life cycle, project closure often merely marks the conclusion of one stage and the transition to another stage of the project's overall life cycle, as shown in Figure 12-2. Seen from this perspective, project closure is actually an extension of the learning and adjusting process that goes on throughout a project. This is true in virtually all industries, although the actual time it takes to cycle through from a plan to the idea for the next version can vary from weeks to years.
The close-out meeting is an opportunity to end a project the way you started it—by getting the team together. During this important event, the team should review what went well, what didn't go well, and identify areas for improvement. All of this should be summarized in the final close-out report. A final close-out meeting with the customer is also essential. This allows the organization to formally complete the project and lay the groundwork for potential future work.
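The closure tasks listed above lend themselves to a simple running checklist that the team can walk through at the close-out meeting. The sketch below is illustrative only: the task names paraphrase the list above, and tracking them in a script rather than in a shared document is an assumption for demonstration, not a recommendation of the text.

# Hypothetical close-out checklist based on the closure tasks listed above.
closure_tasks = {
    "Transfer deliverables to the customer": True,
    "Cancel supplier contracts": True,
    "Reassign staff, equipment, and other resources": False,
    "Finalize and archive project documentation": False,
    "Hold the close-out meeting": False,
    "Celebrate the completed project": False,
}

def outstanding(tasks):
    """Return the closure tasks that still need attention before final sign-off."""
    return [name for name, done in tasks.items() if not done]

for task in outstanding(closure_tasks):
    print("Still open:", task)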
The close-out report provides a final summary of the project performance. It should include the following:
• Summary of the project and deliverables
• Data on performance related to schedule, cost, and quality
• Summary of the final product, service, or project and how it supports the organization's business goals
• Risks encountered and how they were mitigated
• Lessons learned
Figure 12-2: Seen from a living order perspective, closure is an extension of the learning and adjusting process that goes on throughout a project
Exactly where your work falls in the project's life cycle depends on your perspective as to what constitutes "the project" in the first place. The designers and constructors of a building might consider the acceptance of the building by the owner as project closure. However, the results of the project—that is, the building—live on. Another contractor might be hired later to modify the building or one of its systems, thus starting a new project limited to that work.
If project closure is done thoughtfully and systematically, it can help ensure a smooth transition to the next stage of the project's life cycle, or to subsequent related projects. A well-done project closure can also generate useful lessons learned that can have far-reaching ramifications for future projects and business sustainability. The closeout information at the end of a project should always form the basis of initial planning for any future, similar projects. Although most project managers spend time and resources on planning for project start-up, they tend to neglect the proper planning required for project closure. Ideally, project closure includes documentation of results, transferring responsibility, reassignment of personnel and other resources, closing out work orders, preparing for financial payments, and evaluating customer satisfaction. Of course, less complicated projects will require a less complicated close-out procedure. As with project audits, the smooth unfolding of the project closure phase depends to a great degree on the manager's ability to handle personnel issues thoughtfully and sensitively. In large, ongoing projects, the team may conduct phase closures at the end of significant phases in addition to a culminating project closure.
12.5 From the Trenches: Brian Price
Brian Price, a graduate of the UW Master of Engineering in Professional Practice program (a precursor of the Masters in Engineering Management program), is the former chief power train engineer for Harley-Davidson. He teaches engine project management in the UW Master of Engineering in Engine Systems program. In his twenty-five years managing engine-related engineering projects, he had ample opportunity to see the benefits of good project closure procedures, and the harm caused by bad or non-existent project closure procedures. In his most recent role as a professor of engineering, he tries to encourage his students to understand the importance of ending projects systematically, with an emphasis on capturing wisdom gained throughout a project. Brian shared some particularly insightful thoughts on the topic in an interview:
The hardest parts of any project are starting and stopping. Much of project management teaching is typically devoted to the difficulties involved in starting a project—developing a project plan, getting resources in place, putting together a team, and so on. But once a project is in motion, it gains momentum, taking on a life of its own, making it difficult to get people to stop work when the time comes.
It therefore requires some discipline to get projects closed out in a structured way that ties up all the loose ends. Close-out checklists can help. (For one example, see Figure 14.2 in Project Management: The Managerial Process, by Erik W. Larson and Clifford F. Gray.) The close-out also needs to wrap up final budgets and reallocate resources. Generally speaking, the end of a project is a perfect time to reflect on what went well and what could be done differently next time. The After Action Review (AAR) process, derived from military best practice, is very helpful. It focuses on three distinct, but related areas:
1. Project performance: Did it meet objectives? Was it done efficiently and effectively?
2. Team performance: How well did people work together? Were they stronger than the sum of their parts?
3. Individuals' performances: How did individuals perform? This relates to their personal development.
To learn more about the AAR process, see this in-depth explanation in the Harvard Business Review: https://hbr.org/2005/07/learning-in-the-thick-of-it.
The reflections at the end of a project are a great opportunity to capture key learning, whether technical, managerial, or related to project execution. This can then be codified for dissemination and application on other projects. Continually building a knowledge base is essential for improving techniques and best practice. This never comes easy, as it can be seen as bureaucratic report writing, so as a project manager you will need to insist on it. Keep in mind that the point of building a knowledge base is not, of course, to improve the project you are closing out, but to improve the many as yet undetermined projects that lie ahead. Focus on what it took to deliver the project (time, resources, tasks, budgets, etc.) compared to the original plan. This information is essential in planning the next project. After all, the main reason projects fail is that they were inadequately planned, and the main reason they are inadequately planned is that the planners lacked complete planning information. Your best source of good planning information is wisdom gained from recent, similar projects. Thus, it is essential to capture and disseminate that information at the close of every project.
Finally, don't discount the importance of honoring the achievements of the project team. The project closure stage is a good time to build morale with an end-of-project celebration, especially when a close-knit team is about to be dispersed into other projects. People need a coherent conclusion to their work. Unfortunately, most organizations pay little attention to project closure. This is partly due to basic human psychology—people get excited by the next opportunity. They tend to drift off to the next interesting thing, and something new is always more interesting than something old. But a deeper problem is that organizations tend to be more interested in what the project is delivering, rather than the knowledge and wisdom that allows the company to deliver the project's value. The real worth of an organization is the knowledge that allows it to continue generating value. For Harley-Davidson, for example, that would be its collective knowledge of how to make motorcycles. A well-conducted project closure adds to that knowledge, transforming specific experience into wisdom that the organization can carry forward to future undertakings (2016).
Failure: The Best Teacher
In their book Becoming a Project Leader, Laufer et al.
explain the importance of a tolerance for failure. Projects will occasionally close or radically change course, but that doesn’t mean that the team members who worked on such projects were ineffective. In fact, coping with such challenges can help individuals and teams be much more efficient. In his capacity as a project manager for the U.S. Air Force’s Joint Air-to-Surface Standoff Missile, Terry Little’s response to a failed missile launch was not to scold the contractor, Lockheed, for its failure but rather to ask how he could help. Larry Lawson, project manager at Lockheed, called Terry’s response “the defining moment for the project . . . . Teams are defined by how they react in adversity—and how their leaders react. The lessons learned by this team about how to respond to adversity enabled us to solve bigger challenges.” As Laufer et al. articulate, “By being a failure-tolerant leader, Terry Little was able to develop a culture of trust and commitment-based collaboration” (2018). ~Practical Tips Here are a few practical tips related to project audits and project closure: • Pair inexperienced personnel with pros: People become acutely aware of the loss of knowledge when people retire or move on for other reasons. If an organization lacks a systematic way to archive information, the hard-won knowledge gained through years of experience can walk out the door with the departing employee. To prevent such a loss of vital knowledge, consider pairing inexperienced engineers with older ones, so knowledge is transferred. As a project manager, this is one way you can help to capture knowledge for the good of your team and organization. • Interview team members or create video summaries: If you’re having a hard time getting team members to put their end-of-project summaries down in writing, consider interviewing them and taking notes. Another great option is to ask them to create short videos in which they describe their work on the project. Often people will be more candid and specific when talking to a camera than they are in a formal, written report. • Tell your project’s story: Sometimes it’s helpful to compile a project “biography” that documents a project’s backstory in a less formal way than a project audit. Often this is just an internal document, for the use of the project team only. The more frank you can be in such a document, the more valuable the project biography will be. Also, keep in mind that the most important information about a project is often shared among team members via stories. After all, human cultures have always used stories to express norms and pass on information. They can be a powerful means of exploring the true nature of a project, including the emotional connections between team members. As a project manager, remember to keep your ears open for oft-repeated stories about the projects you are working on, or about past projects. What you might be inclined to dismiss as mere office gossip could in fact offer vital insights into your organization, your project stakeholders, and your current projects. • Make your data visual: When writing an audit or closure report, it’s essential to present data in a way that makes it easy for your intended readers to grasp. This article from the Harvard Business Review offers helpful ideas for creating effective visualizations of project data: https://hbr.org/2016/06/visualizations-that-really-work. 
• Create a repository for audit reports and project summaries: Take the time to establish an organizational repository for storing audit reports and project summaries (whether in writing or video) made by team members. Periodically invite new and experienced project managers to review the repository as a way to promote organization-wide learning and professional development. Make sure this repository is accessible to the entire organization, and not stowed away in the personal files of an individual project manager. • Don’t rush to finalize project documentation on lessons learned: Sometimes the best time to reflect on a project and pinpoint what you learned is a few weeks or months after the conclusion of project execution. Taking a little time to let things settle will allow you to see the bigger picture and fully understand what went right and what went wrong. • Take the time to celebrate every project: There are a variety of ways to celebrate and recognize everyone’s accomplishments. Some examples include writing personalized thank you letters, writing a letter of reference for each of your team members, giving out awards that have special meaning and value to each person on the team, taking a team picture, creating a team song or a team video that recaps the project, endorsing each project member for specific skills on LinkedIn. You can probably think of many other ways to celebrate a completed project. The important thing is to do something. • Know when to say you’re done: Sometimes, as a project heads toward its conclusion, you have to ask “When is done done?” This can be an issue with some clients, who might continue to ask for attention long after your team’s responsibility has ended. An official project closure procedure can help forestall this kind of problem, by making it clear to all parties that the project is officially over. ~Summary • Many little things can go wrong in a project, but as long as you get the fundamentals right and keep them on target, a project will likely achieve substantial success. However, just because you have the big things right at the beginning of a project doesn’t mean they’ll stay that way. Throughout the life of a project, you need to stop, look, and listen, maintaining a certain level of paranoia about the health of the project and jumping in to alter course when necessary. • Even if you have implemented reliable monitoring systems designed to alert you to any serious problems, you will sometimes need to dive deeper into your project via a formal audit or informal review. The ultimate goal of any audit/review is to generate actionable intelligence in the form of an audit report that can be used to improve the project or, when necessary, justify shutting it down. • Deciding whether to correct course or shut a project down entirely is rarely easy, and is often governed more by fear than good decision-making practices. It’s important to start by seeking honest answers to questions about the project to determine its viability. You also need to keep in mind that a stakeholder’s perspective on a project will influence his or her evaluation of a project’s viability. • Project closure is traditionally considered the final phase of a project, but when seen from the broader, living order perspective, it often merely marks the conclusion of one stage and the transition to another stage of the project’s overall life cycle. 
If project closure is done thoughtfully and systematically, it can help ensure a smooth transition to the next stage of the project’s life cycle, or to subsequent related projects. ~Glossary • audit—A deep investigation into any or all aspects of a project, with the aim of enabling stakeholders to make fully informed decisions about the project’s future. An audit can provide a focused, objective review of part or all of a project. • audit report—A report created at the end of an audit that typically contains an explanation of the context of the audit, including the overall focus or any important issues; an analysis of data, interviews, and related research compiled during the audit; action-oriented recommendations; and, in some cases, lessons learned and possibly one or more supporting appendices. • change management—“Minimizing resistance to organizational changes through the involvement of key players and stakeholders” (Business Dictionary n.d.). • close-out meeting—An opportunity to end a project the way you started it—by getting the team together. During this important event, the team should review what went well, what didn’t go well, and identify areas for improvement. All of this should be summarized in the final close-out report. A final close-out meeting with the customer is also essential. This allows the organization to formally complete the project and lay the groundwork for potential future work. • close-out report—A final summary of project performance. It should include a summary of the project and deliverables; data on performance related to schedule, cost, and quality; a summary of the final product, service, or project and how it supports the organization’s business goals; risks encountered and how they were mitigated; and lessons learned. • genchi genbutsu—A key principle of the famously Lean Toyota Production System, which means “go and see for yourself.” In other words, if you really want to know what’s going on in a project, you need to actually go to where your team is working, and then watch and listen. • project audit/review—An inquiry into any or all aspects of a project, with the goal of learning specific information about the project. • project closure—According to most project management publications, the final phase of a project. However, in the broader, living order vision of a project’s life cycle, project closure often merely marks the end of one stage and the transition to another stage of the project’s overall life cycle—although exactly where your work falls in the project’s lifecycle depends on your perspective as to what constitutes “the project” in the first place. • recovery manager—Term used by Todd C. Williams in Rescue the Problem Project to refer to a consultant brought in from the outside to audit a failing project, and, if possible, get it back on the path to success (17-19). • strategic inflection point—As defined by Andrew Grove, CEO of Intel from 1997 to 2005, “a time in the life of a business when its fundamentals are about to change. That change can mean an opportunity to rise to new heights. But it may just as likely signal the beginning of the end” (1999, 3). A strategic inflection point in an individual project is a time in the life of a project when its fundamentals are about to change. • project auditor—The person responsible for leading an audit or review. Ideally, the project auditor is an outsider who is perceived by all project stakeholders to be fair and objective. 
He or she should have excellent listening skills and broad-based knowledge of the organization or industry.
The first duty of a wise advocate is to convince his opponent that he understands their arguments. —Samuel Taylor Coleridge
Objectives
After reading this chapter, you will be able to
• Explain the importance of negotiation in daily life and on the job
• Describe the advantages of principled negotiation, as described by Roger Fisher and William Ury in their book Getting to Yes
• Explain the role of emotions in negotiations and list some strategies for dealing with them
• Discuss cross-cultural issues related to negotiation
• Provide guidelines on evaluating the ethics of a negotiation
• Define terms related to dispute resolution
The Big Ideas in this Lesson
• Whether you realize it or not, you engage in negotiations every day. Succeeding at any type of negotiation requires emotional intelligence, preparation, and a willingness to understand the needs of the party on the other side of the negotiating table.
• A negotiation can generate positive emotions, especially if you are able to see the negotiation as a chance to build a relationship, strive to have empathy for the other party, and avoid taking personal offense over the natural give and take of a negotiation.
• A living order negotiation ensures that the parties can continue to work together in the future, in an enduring relationship. By contrast, a geometric order approach to negotiation focuses on immediate results, with little regard to long-term relationships.
• To get the most out of a negotiation, learn all you can about the other parties, use your emotional intelligence to perceive unspoken issues, seek a resolution that works for everyone as much as possible, and avoid an "us versus them" outcome. Most importantly, don't ever assume you will get everything you want.
13.1 Negotiation 101
The need for negotiation, or settling differences, is a fact of human life. On a typical morning you might negotiate how much longer your teenager gets to sleep, who gets to use the shower first, and whose turn it is to drive for carpool. Once you get to work, you'll probably encounter even more opportunities for negotiation, some of which could have high-stakes outcomes for your projects, your organization, and your career. Your ability to handle even small-scale negotiations (say, who's responsible for changing the printer's toner cartridge) can have a surprisingly large effect on your team's sense of cohesion and purpose. Some negotiations are informal—the toner cartridge, for instance—while others, such as negotiating with a union, are highly formal, governed by a slew of state and federal laws.
Negotiating Uncertainty
According to Robert Merrill, Senior Business Analyst at the UW-Madison and a longtime project management veteran, one of the most important parts of a negotiation is getting all parties to accept the unknowns of living order: "If we don't have solid predictability on combinations of scope, timing, and cost, and we're negotiating commitments about them, isn't that essentially gambling? We're saying 'I'll bet you \$10 that my cards will be better than your cards!' That's only going to lead to frustrations and disputes later. How about coming to an agreement on where we're each best able to accommodate uncertainty?" (pers. comm., June 19, 2018).
You can tell a lot about a person's negotiation skills from their definition of the term. Short-sighted, ineffective negotiators view a negotiation as a means of getting what they want.
Wiser, more practiced negotiators would be more likely to define negotiation as a discussion with the goal of reaching an agreement that is moderately satisfying to both parties. A negotiation is not a competition. There should be no losers. Nobody gets everything they want in a successful negotiation, but everybody gets something. Perhaps most importantly, a wisely conducted negotiation ensures that the parties can continue to work together in the future. Since negotiation is such an integral part of human life, it is a well-studied art. There are many books on the topic, but they all come down to a few basic ideas: learn all you can about the other parties in the negotiation, use your emotional intelligence to perceive unspoken issues, seek a resolution that works for everyone as much as possible, and avoid an “us versus them” outcome. Most importantly, don’t ever assume that you will get everything you want in a negotiation. For people who tend to think in terms of absolutes, negotiating, which is all about compromise, can feel like foreign territory. The goal of any negotiation is finding a workable solution for all parties and not about one party beating the other. Project managers must negotiate constantly, and not just when working out contractual arrangements with suppliers or subcontractors. This list summarizes some project management situations that call for good negotiation skills. • Proposal: Developing a proposal for a project is a negotiated process. Iterations and adjustments are nearly always necessary, based on feedback and the response of the client to the proposal pitch. From the earliest project stages, stakeholders will be setting out their respective positions and negotiating necessary adjustments to accommodate each other’s objectives. • Scope: In defining scope, tensions between stakeholders regarding what can be delivered are unavoidable. The project scope should reflect a viable delivery plan for the endeavor. It should be realistic, and, if you have already completed similar projects successfully and fully understand what you need to do, it can also be ambitious. Project deliverables for scope, timing, and cost set the expectations for the rest of the project, so it is essential to conduct negotiations regarding these items in a positive way that ensures all stakeholders remain committed to the overall success of the project. • Dispute resolution: Inevitably, issues will arise during project execution that lead to disputes. These are often the result of poor communication or misunderstandings over the interpretation of deliverables. Resolving disputes is an exercise in negotiating corrective actions and in revising the remaining plan for the project. It is essential that these resolutions occur in a timely manner and to the satisfaction of all parties, to avoid the costs of delay and to avoid the issues becoming a distraction to the project’s primary objectives. • Acquiring Resources: Projects often unfold in cross-matrix organizations, where resources are being pulled in many directions as the organization strives to utilize them at maximum efficiency. As a project manager, you need to negotiate with resource managers regarding the availability of resources, including quantity, quality, and timing. • Priorities: Once resources are allocated to a project, the process of managing those resources involves continued negotiation on how and when work tasks are completed. 
From setting task priorities to ensure dependencies between activities are correctly executed, to working with team members on availability, the project manager is constantly adjusting the project plan to cope with the messy reality of projects in living order. • Procurement and contracts: The contracts stage of any project typically includes formalized negotiations. When government organizations are involved, these negotiations may be regulated by law. Increasingly, internal relationships between resource and service providers within an enterprise are covered by provision of service contracts, which need to be negotiated like any other contract. • Risk management: Every project is an exercise in risk management. The project manager is continually involved in negotiating risk trade-offs that might incur additional costs, delays, or changes to project scope. In some cases, the project manager might have to negotiate the transfer of risk between stakeholders. It is essential that these risk negotiations are transparent and consciously accepted by all affected parties. • Closure: The formal sign-off on project delivery occurs at closeout. This process ensures that contractual deliverables have been formally acknowledged as being complete to the satisfaction of the client and all key stakeholders. At this point, stakeholders review the original scope and any agreed-upon deviations negotiated along the way, to verify that the completed project matches what everyone thought they agreed to. If the client is not satisfied, the termination stage may entail negotiating adjustments to the project and acceptance of final delivery status. To be an effective negotiator, a project manager must be empowered with the necessary authority. It’s important to be able to make on-the-spot decisions with an understanding of the consequences, which should be worked out beforehand through scenario planning. All parties will be frustrated if a negotiator lacks the authority to negotiate and makes commitments that cannot be honored, or if the negotiator continually needs to seek approvals. Taking the Middle Path In his book The New Negotiating Edge: The Behavioral Approach for Results and Relationships, Gavin Kennedy advocates a middle path between hard-nosed, aggressive tactics (which he calls red behavior) and a completely rational, win-win style that seeks to satisfy all parties (blue behavior). This middle path—purple behavior—focuses on the two-way exchange necessary to successfully conclude any negotiation. Everyone has to give up something to get something. In a review of Kennedy’s book, Roger Trapp explains: Kennedy saw the need for a different approach because the red style, by assuming that negotiation is all about manipulation, tends to harden attitudes, while the blue one is over-trusting of other people. “The key to solving dilemmas of trust and risk,” he writes, “is not to alternate between non-trusting red and too-trusting blue, but to fuse them into purple conditional behavior. “This fusion neatly expresses the essence of the negotiation exchange: give me some of what I want (my red results side) and I will give you some of what you want (my blue relationship side). Red is taking behavior, blue is giving behavior and purple is trading behavior, taking while giving.” The strength of purple behavior, he argues, is that it is a two-way exchange rather than a one-way street and moreover deals with people as they are and not how you assume or want them to be. 
(1998)
13.2 Focus on Interests Instead of Positions
Resist the Lure of a Midpoint
A preferred method is to be open about what both parties value, and negotiate trade-offs in which each party gives up what they value least. It's also helpful to be realistic from the start about acceptable figures rather than engaging in positional bargaining, in which each party is forced to give way, bit by bit, over a long period of time.
In their seminal 1981 book, Getting to Yes, Roger Fisher and William Ury describe the most common form of negotiation as the kind of haggling you might engage in when buying a used car. You might start by taking up a position at the low end—say, \$4,000. Meanwhile, the car dealer takes up a position on the high end—say, \$12,000. Then the two of you proceed to argue the invalidity of the other's position ("\$4,000 isn't a serious offer!"), while altering your positions bit by bit, until finally you settle on an acceptable mid-point of \$8,000. According to Fisher and Ury, this kind of negotiation, known as positional bargaining, forces people to take up positions and defend them:
In positional bargaining you try to improve the chance that any settlement reached is favorable to you by starting with an extreme position, by stubbornly holding to it, by deceiving the other party as to your true views, and by making small concessions only as necessary to keep the negotiation going. The same is true for the other side. Each of those factors tends to interfere with reaching a settlement promptly. The more extreme the opening positions and the smaller the concessions, the more time and effort it will take to discover whether or not agreement is possible. (2011)
The problem with this type of negotiation is that it creates a contest of wills that can permanently damage relationships:
Each negotiator asserts what he will and won't do. The task of jointly devising an acceptable solution tends to become a battle. Each side tries through sheer willpower to force the other to change its position…. Anger and resentment often result as one side sees itself bending to the rigid will of the other while its own legitimate concerns go unaddressed. Positional bargaining thus strains and sometimes shatters the relationship between the parties. Commercial enterprises that have been doing business together for years may part company. Neighbors may stop speaking to each other. Bitter feelings generated by one such encounter may last a lifetime. (Fisher and Ury)
Inasmuch as time is money, positional bargaining is also expensive because it increases the number of decisions a negotiator has to make, such as "what to offer, what to reject, and how much of a concession to make." The difficulty involved in making so many decisions makes it easier for parties to delay making any decision at all:
Decision-making is difficult and time-consuming at best. Where each decision not only involves yielding to the other side but will likely produce pressure to yield further, a negotiator has little incentive to move quickly. Dragging one's feet, threatening to walk out, stonewalling, and other such tactics become commonplace. They all increase the time and costs of reaching an agreement as well as the risk that no agreement will be reached at all. (Fisher and Ury)
Effective negotiators avoid positional bargaining at all costs.
Rather than setting up a “me versus you” situation, negotiators should try what Fisher and Ury call principled negotiation, which is based on four essential points: • People: Separate the people from the problem. • Interests: Focus on interests, not positions. • Options: Generate a variety of possibilities before deciding what to do. • Criteria: Insist that the result be based on some objective standard. (Fisher and Ury) The first point, separating the people from the problem, focuses on removing emotion from the negotiating process. The second point focuses on the fact that nothing revs up emotion like taking and defending a position. By abandoning positions and focusing on interests instead, the parties involved in the negotiation will begin to see themselves as collaborators, trying to solve a problem together. This in turn makes it easier to brainstorm a list of possibilities, which you can then evaluate based on objective standards agreed to by all parties. It’s also helpful to think in terms of what negotiating parties value. If the other party values something highly and you don’t, that is the perfect thing to trade for something else that is valuable to you and not so much to the other side. This constructive approach helps prevent the kind of roadblocks that arise when you assume that giving anything away, even if you didn’t value it, is a loss. Being open about what you value at the start of a negotiation can save a lot of time, helping you achieve a meaningful trade more quickly. Some might argue that you don’t want to “show your cards” too soon, but in a principled negotiation, in which all parties are focused on achieving the best possible outcome instead of simply beating the other party, putting all our cards on the table works very well. 13.3 Information-Based Bargaining Principled negotiation, as described by Fisher and Ury, is in part an exercise in learning about your negotiating partner. But you also have to be clear about your own motivations and your personal bargaining style. The more you know about yourself and your negotiating partner, the more options you have as the bargaining unfolds. In Bargaining for Advantage: Negotiation Strategies for Reasonable People, G. Richard Shell argues against the existence of any one, all-purpose technique for closing a deal: Experienced negotiators know that there are too many situational and personal variables for a single strategy to work in all cases. To become more effective, you need to get beyond simple negotiation ideas…. You need to confront your anxieties, accept the fact that no two negotiators and situations are the same, and learn to adapt to these differences realistically and intelligently—while maintaining your ethics and self-respect…. Many people are naturally accommodating and cooperative; others are basically competitive; some are equally effective using either approach. But there is only one truth about a successful bargaining style: To be good, you must learn to be yourself at the bargaining table. Tricks and stratagems that don’t feel comfortable won’t work. Besides, while you are worrying about your next tactic, the other party is giving away vital clues and information that you are missing. To negotiate well, you do not need to be tricky. But it helps to be alert and prudent. The best negotiators play it straight, ask a lot of questions, listen carefully, and concentrate on what they and the other party are trying to accomplish at the bargaining table. 
(2006, xvii-xviii) Negotiate Like an FBI Agent In the high-stakes negotiations conducted by FBI hostage negotiators, techniques that demonstrate an understanding of the emotional needs of hostage-takers can be key to a successful resolution. This article by long-time FBI agent Chris Voss includes some tips that can also be helpful in the more mundane negotiations of the business world: http://time.com/4326364/negotiation-tactics/. Voss recommends tactics like these: • Repeat words back to the people you are negotiating with, so they feel that you are listening and have a rapport with them. • Show empathy by saying things like “It sounds like you are concerned about…” • Create a way for the person to say “no,” which makes a person feel safe, rather than “yes,” which can make someone feel cornered. For example questions, like: “Is now a bad time to talk?” and “Have you given up on this project?” • Talk in a way that will encourage people to say “Yes, that’s right,” which shows they see that you understand them. (2016) Once they have clarified their own biases in the negotiation process, effective negotiators turn their focus to learning about their negotiating partners and adapting in response to what they’ve learned. In other words, effective negotiators work in living order, staying flexible and keeping their eyes open to new information that might change their approach in the negotiation room. Shell’s approach to bargaining, which he calls information-based bargaining, capitalizes on this living order understanding of the changeable nature of human interactions. His approach focuses on three main aspects of negotiation: Solid planning and preparation before you start, careful listening so you can find out what the other side really wants, and attending to the “signals” the other party sends through his or her conduct once bargaining gets under way. As the name suggests, Information-Based Bargaining involves getting as much reliable knowledge about the situation and other party as possible…. Information-Based Bargaining is a “skeptical school” of negotiation. It treats each situation and person you face as unique. It cautions against making overly confident assumptions about what others want or what might be motivating them. And it emphasizes “situational strategies” tailored to the facts of each case rather than a single, one-size-fits-all formula. (2006, xviii-xix) Information-based bargaining is useful in all kinds of negotiations, helping to calm disputants in even the most contentious situations. The first phase of the process is careful research into the concerns of all parties. Robert L. Zorn explains: “The research becomes the underpinning of information-based bargaining.” For example, in a schoolboard/union negotiation, it’s helpful to start by collecting reliable information on salaries. According to Zorn: Research should show the historical trends of salary increases as well as how those salaries rank on a comparative basis to similar school districts. This information can be compiled by percentages or dollars, by salaries paid for specific positions or by salaries as a percentage of the budget over the years and by comparability with salaries in other school districts with like fiscal resources and similar demographics…. The idea is to put together enough information that most individuals looking at the information will come to the same or a similar conclusion as to where salaries should or could go in the new agreement. 
Thus the bargaining is driven by information rather than what one side or the other side wants without regard to what the information shows. This style of bargaining is predicated on the assumption that educated persons looking at the same information will come to the same or similar conclusions. Obviously, this doesn’t happen every time. In cases where it doesn’t, and matters must go to mediation, all the information compiled is extremely helpful in presenting one’s case to the mediator. The mediator also will use this information to try to get the parties to say yes to an item based on factual data rather than emotion or what one side wants. (n.d.) 13.4 Embrace the Power of Emotion For many people, the prospect of a negotiation can generate a wave of anxiety and other negative emotions. So, it’s good to keep in mind that working out a deal with another person can also generate positive emotions, especially if you are able to • See the negotiation as a chance to build a relationship • Strive to have empathy for the party on the other side of the table • Avoid taking personal offense over the natural give and take of a negotiation Indeed, many experienced engineers find that well-conducted negotiations result in deep, trusting relationships that last throughout their careers. Still, many people struggle with negative emotions—fear, anger, suspicion, jealousy, regret, resentment, and contempt—when involved in negotiations. General Motors took this fact into consideration when it launched its Saturn division in the 1990’s, opening dealerships committed to a strict “no-haggle” policy. That, plus “the absence of a high-pressure sales environment and the high level of customer satisfaction contributed to a sense of brand loyalty among Saturn’s customers” (Wharton School 2009). For the first time in the United States, you could walk into a dealership and buy a car without having to negotiate. For some, that felt like a huge relief. Of course, negotiation avoidance is not a realistic option in all facets of life. But it is possible to minimize negative emotions by preparing carefully for any negotiation, and by focusing on positive emotions instead. In their book, Beyond Reason: Using Emotions as You Negotiate, Roger Fisher and Daniel Shapiro present a strategy for using positive emotions as a negotiating tool. They argue that it is impossible to evaluate and respond to every single emotion that arises among the various parties in a negotiation. Instead, they recommend focusing on the core concerns that psychologists tell us generate emotions in most people. According to Fisher and Shapiro, core concerns are “human wants that are important to almost everyone in virtually every negotiation. They are often unspoken but are no less real than our tangible interests. Even experienced negotiators are often unaware of the many ways in which these concerns motivate their decisions” (2005, 14). Fisher and Shapiro focus on the following five core concerns: • Appreciation—The desire to feel recognized and respected • Affiliation—The desire to belong and have social intimacy with others • Autonomy— The desire to make your own decisions • Status— The desire to maintain a sense of importance relative to others that is appropriate and recognized • Role—The desire to play a fulfilling and important part in a situation They describe the five core concerns in Table 13-1. 
Table 13-1: Five core concerns that affect everyone in a negotiation (Source: Beyond Reason: Using Emotions as You Negotiate, by Roger Fisher and Daniel Shapiro, Table 3, p. 17.) Gender and Negotiation At a Wharton School conference on women in business, a group of seasoned female business professionals discussed gender differences in negotiation strategies and effectiveness. They agreed that because women tend to be better listeners than men, they have a pronounced advantage in many negotiations. However, because they tend to underplay their own value in a situation, they often fail to negotiate successfully on their own behalf. You can read a summary of the conference discussion here: http://knowledge.wharton.upenn.edu/article/women-and-negotiation-are-there-really-gender-differences/. This article discusses differences in the way men and women negotiate, with suggestions on how each gender can learn from the other: http://work.chron.com/can-gender-affect-negotiation-5771.html. Emotional Intelligence The higher your level of emotional intelligence, the more success you’ll have at managing emotions during a negotiation. Take a moment to reread the section on emotional intelligence (the ability to recognize your own feelings and the feelings of others) in Lesson 5. Fisher and Shapiro’s book includes a chapter on each core concern, with plentiful advice on how to use them to stimulate positive emotions such as enthusiasm, happiness, and hopefulness, which in turn can make people more prone to cooperate, more creative, and more inclined to trust each other. In most cases, focusing on core concerns will keep the conversation moving toward a successful resolution. However, you do have to be prepared for the power of negative emotions which, according to Fisher and Shapiro, can have the following ill effects: • tunnel vision: An inability to take in the entire situation, in which the “focus of your attention narrows and all you are aware of are your strong emotions” (147). • behavior controlled entirely by emotion: “As your emotions escalate, you risk acting in ways you will regret…. Strong emotions inform us that a concern is probably not being met, and they rattle us to try to satisfy that concern now”(147-155). • an equally angry negotiating partner: “Your anger can stimulate the other person’s anger, just as their anger can easily be ‘caught’ by you. Strong negative emotions are like a snowball rolling down a hill. They get bigger as they roll along”(147). The secret to managing negative emotions is, first and foremost, being aware of them. Fisher and Shapiro recommend taking your emotional temperature throughout a negotiation “to catch your emotions before they overwhelm your ability to act wisely” (147). They offer a number of suggestions for calming yourself, including breathing deeply, temporarily changing the subject, or taking a quick break that allows you to leave the room. After a negotiation is over, try to take time to evaluate how your core concerns, and the core concerns of your negotiating partner, stimulated negative emotions in the first place. 13.5 When Worlds Collide Cross-cultural issues can add complexity to any negotiation. For example, people from different cultures might have different conversation styles or conflicting ideas on the importance of punctuality. They might even approach a negotiation with totally different understandings of the overall purpose of a negotiation in the first place. 
“For deal makers from some cultures, the goal of a business negotiation, first and foremost, is a signed contract between the parties. Other cultures tend to consider that the goal of a negotiation is not a signed contract but rather the creation of a relationship between the two sides” (Salacuse 2004). In many cultures, saving face—or, avoiding humiliation—is an essential concern in any negotiation. In that case, it may be necessary to negotiate a compromise in which one party appears to have agreed to important concessions. In some cultures, there is also a question of status. Parties will only accept negotiating with someone of perceived equal status to themselves. This web page offers some helpful suggestions for face saving in Asian cultures: https://www.tripsavvy.com/saving-face-and-losing-face-1458303. This helpful article explains ten ways that culture can affect a negotiation: http://iveybusinessjournal.com/publication/negotiating-the-top-ten-ways-that-culture-can-affect-your-negotiation/. This article from the Harvard Business Review provides five rules of thumb for negotiating with someone from a different culture: https://hbr.org/2015/12/getting-to-si-ja-oui-hai-and-da. Thoughts from an Experienced Negotiator Brian Price is a graduate of the University of Wisconsin Master of Engineering in Professional Practice program (a precursor of the MEM program), the former chief power train engineer for Harley-Davidson, and an adjunct professor in the UW Master of Engineering in Engine Systems program. In a conversation with the authors, he shared this example of the kind of misunderstandings that can arise when negotiating across cultures: My experience negotiating in Korea was enlightening. It is impolite to say “no” in Korea. It is considered very blunt and rude. During negotiations, I might say something like “Can we agree to a delay in delivery by two weeks?” The Korean negotiators would pause and then say “mmm…yes.” I thought we had just negotiated a delay, but we hadn’t. What the Korean negotiators meant was “I hear what you say. I’m not going to say no and be rude, but I don’t agree with your proposal.” At the time, I did not pick up on the subtle cues of the pause and the meaning of a single “yes.” If they did agree to something, it was acknowledged by a double yes, said clearly and without hesitation—“Yes, yes.” It took me several misunderstandings over several months to work this out. Such cultural confusion can be compounded by a linguistic quirk in which Japanese and Koreans answer negative questions with a positive answer, as described in this web page: http://en.rocketnews24.com/2016/01/23/when-yes-means-no-the-japanese-language-quirk-that-trips-every-english-speaker-up/. 13.6 Ethics and Negotiation As Stan Lee, the creator of Spider Man, so memorably said, “With great power comes great responsibility.” As you hone your negotiating skills, you take on the moral burden of ensuring that you don’t use your skills to force someone into a bad situation. You also need to factor in the greater good—that is, issues that lie beyond your immediate interests or the interests of your organizations—and think about what’s best for society as a whole. To help evaluate the ethics of any situation, the Harvard Law School’s Program on Negotiation suggests asking yourself five questions: • Negotiation Principle 1. Reciprocity: Would I want others to treat me or someone close to me this way? • Negotiation Principle 2. 
Publicity: Would I be comfortable if my actions were fully and fairly described in the newspaper? • Negotiation Principle 3. Trusted friend: Would I be comfortable telling my best friend, spouse, or children what I am doing? • Negotiation Principle 4. Universality: Would I advise anyone else in my situation to act this way? • Negotiation Principle 5. Legacy: Does this action reflect how I want to be known and remembered? (Wheeler 2017) If you can answer yes to all five questions, then you can probably assume that you are conducting an ethical and honorable negotiation. But keep in mind that, in some situations, the ethical solution may not actually be legal. For example, lawyers may perceive the moral superiority of their opponents’ position, but be legally bound to act only in the interests of their clients. One particular challenge is the differing laws and ethical standards around the world. For international companies, this can be a tricky area. Ultimately, you need to follow your own moral compass. You should always stay within the law, but also ensure that your personal ethical standards are not being compromised through the negotiation process. If you find yourself in a situation in which your ethical and legal obligations are murky, seek out advice from a more experienced professional in your field. You should also consult the Code of Ethics for Engineers, published by the National Society of Professional Engineers, which is available here: https://www.nspe.org/sites/default/files/resources/pdfs/Ethics/CodeofEthics/Code-2007-July.pdf. For a more in-depth discussion of ethics and bargaining, check out Chapter 11 of G. Richard Shell’s book Bargaining for Advantage. Among many helpful ideas, he includes some suggestions on what to do when you face unethical tactics from your negotiating partners. Negotiating in Good and Bad Faith Robert Merrill, Senior Business Analyst at the UW-Madison and a seasoned project manager, has spent a lot of time reading and thinking about negotiation tactics. In his project management work this was essential, he says, because “a lot of projects live or die on how they handle conflict, which means negotiation is really the art of handling disagreements” (pers. comm., June 19, 2018). One essential part of negotiating is remembering that most people negotiate in good faith, but some people routinely negotiate in bad faith. This is especially true of people with dark triad personalities—that is, personalities marked by narcissism, a lack of empathy (psychopathy), and Machiavellianism, or a desire to manipulate others (Whitbourne 2013). Here’s what Merrill has to say on the topic: Most of the time, negotiation is not a competition, but when you are dealing with a dark-triad personality it absolutely is. Such people are attracted to power, and they tend to climb organizational ladders at least for a while, because they “get things done” and have a way of offloading their failures and avoiding the collateral damage. On the other hand, just because a small percentage of the population is a psychopath doesn’t mean everyone is. So don’t react to each aggressive negotiation request with weaponized facts, treating the negotiation like a form of combat. In other words, assume the people across the table are negotiating in good faith. 
But when you do verify that you’re sitting across from someone who doesn’t care if you make a promise your team can’t keep, which will burn them up and damage your reputation in the process, you have to behave quite differently. Verify the support of your allies. Marshal your facts. Draw on your deepest well of unconditional positive regard—the other person is a human soul, too, and you have no idea how they got to where they are. Then crank your boundaries and empathy up to 10 and wade in. To prepare for this kind of situation, I suggest reading Never Split the Difference, by Chris Voss. (pers. comm., June 19, 2018) 13.7 Resolving Disputes As a project manager, you will often have to marshal your negotiation skills in order to resolve disputes among stakeholders. Most disputes are small affairs that people can work out amongst themselves. On the other end of the spectrum are complicated and highly charged legal disputes that require the work of lawyers specially trained in dispute resolution law. Hopefully your experience will be limited to the former, but as a project manager you should at least be familiar with the following terms: • dispute resolution: A “process for resolving differences between two or more parties or groups” (Business Dictionary n.d.). • arbitration: A dispute resolution method in which the disputing parties agree to let a neutral third party make a final decision. This article explains the many issues involved in arbitration: http://www.mediate.com/articles/grant.cfm. • consensus building: A “conflict-resolution process used mainly to settle complex, multiparty disputes” (Burgess and Spangler 2003). • mediation: A dispute resolution process in which a neutral third party helps “disputants come to consensus on their own” (Program on Negotiation: Harvard Law School 2018). You can avoid disputes in the first place by doing the following: • Make sure all contracts, plans, proposals, and other documents are clearly written and easy to understand. • Make sure your decision-making processes are as transparent as possible. For example, in construction, it’s helpful to have a clear process for change orders, so there’s no uncertainty about why a team member spent so much money or why they thought they had the authority to do so in the first place. Getting Everyone to Agree Consensus building, which is widely used to solve complicated environmental and public policy disputes, is “useful whenever multiple parties are involved in a complex dispute or conflict. The process allows various stakeholders (parties with an interest in the problem or issue) to work together to develop a mutually acceptable solution” (Burgess and Spangler 2003). Consensus building emphasizes working toward a solution that everyone can live with. It is typically time-consuming and doesn’t work for every type of problem, but it can result in satisfying long-term solutions to seemingly intractable problems. Consensus building is especially effective when • The problem is not well-defined, or the disputants disagree on the definition • Disputants have widely varying interests and yet are interconnected in some important way. This is often the case in disputes involving natural resources • Previous attempts to solve the problem, perhaps by imposing a solution, have proved fruitless For more on consensus building, see Burgess and Spangler (2003). ~Practical Tips • Stay focused: For each negotiation, have clear, specific objectives in mind that keep participants focused on project success. 
• Look for ways to turn a competitive negotiation into a shared pursuit of project goals: Focus on options that create a clear, common goal with shared consequences and motivation to work collaboratively. For example, you could set up a shared incentive fund for on-time, on-budget project completion. • Be the negotiating partner you want to have: Remember that each negotiation is an interaction with a partner with whom you need to have a constructive ongoing relationship. Putting a project partner in an impossible bind may put the success of the project in jeopardy. • Use mindfulness exercises to manage negotiation-related anxiety: It’s entirely normal to feel anxious while negotiating. Simply admitting to yourself that you do feel uneasy can go a long way toward lessening the effects of your anxiety. This article describes a few classic calming techniques: https://www.everyday-mindfulness.org/3-quick-mindfulness-practices-to-overcome-worry-anxiety-and-panic/. • Be sincere and show respect: Many studies underscore the importance of honesty and sincerity in any negotiation. Before and during the negotiation, seek to understand and show respect for the other party’s interests. • Make sure you know what you want: Before you walk into a negotiation, clarify what is important to you and why it is important. • Understand the alternatives you would be willing to accept: Instead of thinking in terms of a bottom line, or a “walk away”—that is, the issue that will force you to walk away from the negotiation—think in terms of a best alternative to a negotiated agreement, or BATNA, as explained here: http://www.negotiationtraining.com.au/articles/next-best-option/. Having a clearly defined BATNA helps you understand your options, should your negotiation fail. • Use your negotiation time wisely: Show respect to the other parties in the negotiation by valuing their time. Make it clear that the goal of the process is to come to an agreement and not to continue negotiating endlessly. • Don’t let multiple options decrease your effectiveness as a negotiator: You might think that having multiple offers on the negotiation table gives you more leverage, but research suggests otherwise. Why? “In some cases, having several low offers caused people to underestimate the value of what they were selling…inhibiting their ability to hold out for a better deal.” By contrast, “having a single strong offer on the table rather than many undesirable offers can instill feelings of power and confidence and allow for bolder negotiating strategies” (Harvard Business Review 2017). • Take action to break an impasse: If you find a negotiation grinding to a halt, try some options for getting unstuck, as described here: https://oluchinwaiwu.wordpress.com/2009/10/01/five-ways-of-resolving-an-apparent-deadlock-in-a-negotiation/ and here: https://www.cedr.com/solve/advice/?p=9. • Don’t be afraid to say nothing: Silence is an amazingly effective negotiation technique. It forces the other party to fill up the empty conversational space, often by making unexpected concessions. Such hardball tactics are not usually desirable because they can cause irreparable damage to relationships between the negotiation parties. But depending on the situation and the gravity of the negotiation, sometimes they are necessary. • Walk a mile in your negotiating partner’s shoes: You’ll always get better results in a negotiation if you can make the effort to understand everyone’s point of view. 
The easiest way to do this is simply talking to your opposite number in the negotiation about what he or she hopes to achieve. In high-stakes negotiations involving lots of people, consultants will sometimes ask participants to spend a day role-playing—acting out the part of the people across the table from them. This forces all participants to internalize perspectives other than their own. • Think about what you’ve learned: Reflect on every negotiation experience and use what you learn in future negotiations. ~Summary • The need for negotiation, or settling differences, is a fact of human life. Negotiation is not a competition. There should be no losers. Nobody gets everything they want in a successful negotiation, but everybody gets something. Perhaps most importantly, a wisely conducted negotiation ensures that the parties can continue to work together in the future. • In their seminal 1981 book, Getting to Yes, Roger Fisher and William Ury recommend focusing on interests in a negotiation, instead of staking out positions that you then have to defend. Rather than setting up a “me versus you” situation, Fisher and Ury advocate a method called principled negotiation. • The more you know about yourself and your negotiating partner, the more options you have as the bargaining unfolds. In Bargaining for Advantage: Negotiation Strategies for Reasonable People, G. Richard Shell recommends an approach he calls information-based bargaining, which involves careful preparation and listening, and understanding that every negotiation is unique. • Well-conducted negotiations can result in long-lasting, trusting relationships that can sustain your career. To avoid negative emotions, prepare for each negotiation carefully, and try to focus on positive emotions. In their book, Beyond Reason: Using Emotions as You Negotiate, Roger Fisher and Daniel Shapiro recommend focusing on the core concerns that psychologists tell us generate emotions in most people. • Cross-cultural issues can add complexity to any negotiation. For example, in many cultures, saving face—or, avoiding humiliation—is an essential concern in any negotiation. In that case, it may be necessary to negotiate a compromise in which the opposing party appears to have agreed to important concessions. • As you hone your negotiating skills, you take on the moral burden of ensuring that you don’t use your skills to force someone into a bad situation. You also need to factor in the greater good—that is, issues that lie beyond your immediate interests or the interests of your organizations—and think about what’s best for society as a whole. • As a project manager, you will often have to marshal your negotiation skills to resolve disputes among stakeholders. Tools for resolving disputes include arbitration, consensus building, dispute resolution, and mediation. ~Glossary • arbitration—A dispute-resolution method in which the disputing parties agree to let a neutral third party make a final decision. • consensus building: A “conflict-resolution process used mainly to settle complex, multiparty disputes” (Burgess and Spangler 2003). • core concerns—According to Roger Fisher and Daniel Shapiro, “human wants that are important to almost everyone in virtually every negotiation. They are often unspoken but are no less real than our tangible interests” (2005, 14). Fisher and Shapiro focus on the following five core concerns: appreciation, affiliation, autonomy, status, and role. 
• dispute resolution—A “process for resolving differences between two or more parties or groups” (Business Dictionary n.d.). • information-based bargaining—An effective type of negotiation described by G. Richard Shell in his book Bargaining for Advantage, which focuses on “three main aspects of negotiation: solid planning and preparation before you start, careful listening so you can find out what the other side really wants, and attending to the ‘signals’ the other party sends through his or her conduct once bargaining gets under way” (Shell 2006, xviii-xix). • mediation—A dispute resolution process in which a neutral third party helps “disputants come to consensus on their own” (Program on Negotiation: Harvard Law School 2018). • negotiation—A discussion with the goal of reaching an agreement that is moderately satisfying to both parties. Nobody gets everything they want in a successful negotiation, but everybody gets something. Perhaps most importantly, a wisely conducted negotiation ensures that the parties can continue to work together in the future. • positional bargaining—An inefficient form of negotiation in which opposing parties take up positions and defend them, making only small concessions when forced to do so.
An organization’s ability to learn, and translate that learning into action rapidly, is the ultimate competitive advantage. (Slater 1998, 12) —Jack Welch, CEO of General Electric, 1981-2001 Objectives After reading this chapter, you will be able to • Discuss the role of learning in personal and organizational transformation • Explain issues related to project management maturity models • Distinguish between thin and thick sustainability • List ways to facilitate personal project management maturity • List ways to facilitate organizational project management maturity The Big Ideas in this Lesson • All organizational and personal change starts with learning. The kind of evolution associated with living order project management is a natural result of taking in new ideas and information. Don’t persevere in a particular approach or methodology simply because it’s the one you know. • A focus on project management maturity, and the organizational learning that goes along with it, are essential components of any continuous improvement effort. • An important element of your personal project management maturity is figuring out where you and your organization stand on questions of sustainability. • You need to commit to your own personal development. 14.1 Developing Yourself and Your Organization The word “development” is widely used in business to refer to a process of transformation. In “product development” it refers to the transformation of an idea into a new product. In “real estate development” it refers to the transformation of a piece of property into something of greater value by constructing buildings, creating roads, and so on. Personal and organizational development are also processes of transformation—change that makes a person a more effective project manager, change that makes an organization a more successful company. Throughout these lessons, you have read about the many ways that dynamic, ever-changing living order affects the work of project managers. Now we’ll consider how these agile, resourceful thinkers embrace the change required to advance their own personal development, as well as the development of their organizations. We’ll also focus on the concept of project management maturity and the maturity models used to measure it. Then we will explore the many ways that learning can make you a better project manager, and the cultural and organizational barriers to effective learning. Learning and Mindfulness Researchers have discovered a lot about the effects of mindfulness, a state of nonjudgmental awareness, on an individual’s ability to learn. This research is well worth exploring on your own. 14.2 The ABCs of Learning All organizational and personal change starts with learning. But what is learning in the first place? It’s not just acquiring information. According to Daniel H. Kim, it is a process of accumulating both know-how and know-why: Learning encompasses two meanings: (1) the acquisition of skill or know-how, which implies the physical ability to produce some action, and (2) the acquisition of know-why, which implies the ability to articulate a conceptual understanding of an experience…. For example, a carpenter who has mastered the skills of woodworking without understanding the concept of building coherent structures like tables and houses can’t utilize those skills effectively. 
Similarly, a carpenter who possesses vast knowledge about architecture and design but who has no complementary skills to produce designs can’t put that know-why to effective use. Learning can thus be defined as increasing one’s capacity to take effective action. (Kim 1993) Robert Merrill, Senior Business Analyst and veteran project manager, also recommends making time for know-when learning—that is, learning when specific tools and tactics are useful. For instance, a coach knows how to motivate athletes in a number of ways, but a great coach knows when to use each type of motivation. And keep in mind that part of learning is practicing. You never fully learn how to do something until you actually do it (pers. comm., July 2, 2018). Take a moment to think about that: learning is “increasing one’s capacity to take effective action.” That may not be true of all learning—you might want to learn about Roman history or metalworking simply because it gives you pleasure and deepens your understanding of life in general, not because either pursuit will prepare you for action. But as you plot your professional development, you would be wise to remember that time devoted to learning is a limited resource. So learning that increases your capacity for effective on-the-job action, and that positions you for future assignments with increased responsibility, is your best investment. According to Morgan W. McCall, Jr., who has written extensively on personal development, that kind of learning is usually the result of hands-on experience. He argues that leaders are made, not born, through the trial and error learning that occurs through actual work: “adversity, challenge, frustration, and struggle lead to change” (1998, xiv). However, despite mountains of research showing that experience is the best teacher, organizations often sabotage their employees’ ability to learn from failure: The paradox of wanting people to learn from experience, which by definition involves trial and error, yet punishing them when trial resulted in error, highlights a fundamental dilemma for development. That is, for learning to occur, the context must support learning…. At the most basic level, development is directly affected by the organization’s business strategy (what it is trying to achieve) and by its values (what it is willing to do to get there). These organizational issues determine what is desired, what is rewarded, and what is tolerated. (Morgan W. McCall, 58) As a project manager, you probably can’t control whether your organization’s business strategy supports and values experiential learning, but you can strive to cultivate non-judgmental project teams that allow for learning from experience. 14.3 Project Management Maturity The changing nature of living order ensures that organizations that continue to do what they’ve always done will, sooner or later, find themselves unable to compete in the modern marketplace. Those that succeed often embrace some form of continuous improvement, a key practice of Lean project management in which organizations focus on improving “an entire value stream or an individual process to create more value with less waste” (Lean Enterprise Institute 2014). Or to put it more simply, they strive to create “a culture of continuous improvement where all employees are actively engaged in improving the company” (Vorne). The exact form continuous improvement takes in an organization varies depending on the industry, the current state of the market, and so on. 
But for project-centered organizations, a focus on project management maturity, and the organizational learning that goes along with it, are essential components of any continuous improvement effort. Indeed, as David A. Garvin explains, continuous improvement is impossible without learning: How, after all, can an organization improve without first learning something new? Solving a problem, introducing a product, and reengineering a process all require seeing the world in a new light and acting accordingly. In the absence of learning, companies—and individuals—simply repeat old practices. Change remains cosmetic, and improvements are either fortuitous or short-lived. (1993) The term project management maturity refers to the “progressive development of an enterprise-wide project management approach, methodology, strategy, and decision-making process. The appropriate level of maturity will vary for each organization based on its specific goals, strategies, resource capabilities, scope, and needs” (PMSolutions 2012). Before you can assess an organization’s overall project management maturity, it’s helpful to have an objective standard of comparison to help you understand the context in which you are operating. In other words, you need a project maturity model, also known as a capability maturity model. A maturity model is a set of developmental stages that can be used to evaluate an organization’s state of maturity in a particular domain. More specifically, according to Becker, Knackstedt, and Poppelbuss, a maturity model represents an anticipated, desired, or typical evolution path of these objects shaped as discrete stages. Typically, these objects are organizations or processes. The bottom stage stands for an initial state that can be, for instance, characterized by an organization having little capabilities in the domain under consideration. In contrast, the highest stage represents a conception of total maturity. Advancing on the evolution path between the two extremes involves a continuous progression regarding the organization’s capabilities or process performance. (2009) Among other things, a maturity model offers • The benefit of a community’s prior experiences • A common language and a shared vision • A framework for prioritizing actions • A way to define what improvement means for your organization (Select Business Solutions n.d.) The first widely used maturity model, the Capability Maturity Model (CMM), was developed in the software industry in the late 1980’s by the Software Engineering Institute (SEI) at Carnegie Mellon University, working in conjunction with the United States Department of Defense. Mary Rouse describes the five levels of CMM maturity as follows: • At the initial level, processes are disorganized, even chaotic. Success is likely to depend on individual efforts, and is not considered to be repeatable, because processes would not be sufficiently defined and documented to allow them to be replicated. • At the repeatable level, basic project management techniques are established, and successes could be repeated, because the requisite processes would have been established, defined, and documented. • At the defined level, an organization has developed its own standard software process through greater attention to documentation, standardization, and integration. • At the managed level, an organization monitors and controls its own processes through data collection and analysis. 
• At the optimizing level, processes are constantly being improved through monitoring feedback from current processes and introducing innovative processes to better serve the organization’s particular needs. (Rouse 2007) Since the development of the CMM, over a hundred maturity models have been developed for the IT industry alone (Becker, Knackstedt and Poppelbuss 2009). Meanwhile, other industries have developed their own models, each designed to articulate the essential stages of maturity for a particular type of organization. Developing and implementing proprietary maturity models, and assessment tools to determine where an organization falls on the maturity spectrum, is a specialty of countless business consulting firms. Around the world, the most widely recognized maturity model is the Organizational Project Management Maturity Model (OPM3), developed by the Project Management Institute. The OPM3 is designed to help an organization support its organizational strategy from the project level on up through the portfolio and program levels. You can read more about it here: https://www.pmi.org/learning/library/grow-up-already-opm3-primer-8108. The ultimate goal of any maturity model is to help an organization change where change will introduce clear benefits. According to Joseph A. Sopko, “research from many sources continues to show that higher organizational maturity is synonymous with higher performance” (2015). As maturity models become more widely used, project-based organizations should factor in the market value of being recognized as a reliable supplier. If the organization’s maturity is lower than customer or market expectations, it may be viewed as a high-risk supplier that would add performance risk to its customers’ programs. And, obviously, if the organization’s maturity is lower than that of its competitors, it will lose competitive advantage since higher OPM maturity has been correlated with reliably delivering to plan and meeting customer expectations.(Sopko 2015) 14.4 Knowledge Management and Organizational Learning Many projects deliver tangible outcomes, such as physical artifacts, buildings, and infrastructure. Others produce software, reports, or other types of output. But all projects create knowledge. Indeed, this knowledge can end up being more valuable to the organization than any short-term financial gain. However, because intellectual capital is longer-term and intangible, it is often underappreciated at the point of creation. An organization that is fully committed to project management maturity does not make this mistake. On the contrary, it cultivates a culture of systematic knowledge management, which William R. King defines as follows: Knowledge management is the planning, organizing, motivating, and controlling of people, processes, and systems in the organization to ensure that its knowledge-related assets are improved and effectively employed. Knowledge-related assets include knowledge in the form of printed documents such as patents and manuals, knowledge stored in electronic repositories such as a “best-practices” database, employees’ knowledge about the best way to do their jobs, knowledge that is held by teams who have been working on focused problems, and knowledge that is embedded in the organization’s products, processes, and relationships. The processes of KM involve knowledge acquisition, creation, refinement, storage, transfer, sharing, and utilization. 
The KM function in the organization operates these processes, develops methodologies and systems to support them, and motivates people to participate in them. The goals of KM are the leveraging and improvement of the organization’s knowledge assets to effectuate better knowledge practices, improved organizational behaviors, better decisions, and improved organizational performance. Although individuals certainly can personally perform each of the KM processes, KM is largely an organizational activity that focuses on what managers can do to enable KM’s goals to be achieved, how they can motivate individuals to participate in achieving them, and how they can create social processes that will facilitate KM success. (2009) When done right, knowledge management leads to organizational learning, or the process of retaining, storing, and sharing knowledge within an organization. More than the sum of the knowledge of all the members of the organization, organizational knowledge “requires systematic integration and collective interpretation of new knowledge that leads to collective action and involves risk taking as experimentation” (Business Dictionary n.d.). Organizational learning as we define it here is a positive thing, a source of renewal for successful companies. But not all learning leads to good outcomes. Haphazard learning that occurs without any conscious evaluation can lead to bad habits and half-baked notions about best practices. As Daniel H. Kim explains, learning is an essential function of all organizations, but it’s not all productive: All organizations learn, whether they consciously choose to or not—it is a fundamental requirement for their sustained existence. Some firms deliberately advance organizational learning, developing capabilities that are consistent with their objectives; others make no focused effort and, therefore, acquire habits that are counterproductive. Nonetheless, all organizations learn. (1993) In Lesson 12, we discussed some important ways to contribute to organizational learning—capturing lessons learned during project closure, and taking part in communities of practice. These and other practices can help transform a company into a learning organization, which David A. Garvin defines as “an organization skilled at creating, acquiring, and transferring knowledge, and at modifying its behavior to reflect new knowledge and insights” (1993). Note that knowledge is only half of the equation. A true learning organization responds to knowledge by modifying its behavior: This definition begins with a simple truth: new ideas are essential if learning is to take place. Sometimes they are created de novo, through flashes of insight or creativity; at other times they arrive from outside the organization or are communicated by knowledgeable insiders. Whatever their source, these ideas are the trigger for organizational improvement. But they cannot by themselves create a learning organization. Without accompanying changes in the way that work gets done, only the potential for improvement exists. (Garvin 1993) Sharing Learning as Stories The authors of Becoming a Project Leader worked with several companies (Procter & Gamble, Motorola, NASA, Skanska and Turner, and Boldt) to create communities of practice. These organizations identified their best project managers to take part in a forum, which would meet 2-4 times per year for a day or two per meeting. Forum members submit stories before meeting, and a handful of those stories are then selected for discussion. 
At the meeting, stories are discussed and reflected upon and then eventually published and shared with the entire organization. Denise Lee extended the community of practice concept with her Transfer Wisdom Workshops at NASA to help serve “NASA’s practitioners who were not members of the community of practice and were located at NASA centers throughout the US.” As Lee explains, “Our aim was to help the men and women who work on NASA projects step away from their work for a moment in order to better understand it, learn from it, and then share what they learned with others” (2003). The concept of the learning organization was first popularized by Peter Senge in the early 1990’s in his book The Fifth Discipline: The Art and Practice of the Learning Organization. Since then, many researchers have investigated the role of learning in organizations. After over two decades of study and experimentation, the general consensus is that, to be effective, learning needs to be targeted at specific goals. Most importantly, according to Shlomo Ben-Hur, Bernard Jaworski, and David Gray, it should support the organization’s strategy: Too many corporate learning and development programs focus on the wrong things. A better approach to developing a company’s leadership and talent pipeline involves designing learning programs that link to the organization’s strategic priorities…. The word learning, which has largely replaced training in the corporate lexicon, suggests “knowledge for its own sake.” However, to justify its existence, corporate learning needs to serve the organization’s stated goals and should be based on what works. (Ben-Hur, Jaworski and Gray 2015) This is a good time to reflect back on Daniel H. Kim’s definition of learning as “increasing one’s capacity to take effective action.” It’s one thing for an individual to translate learning into effective action. It’s quite another for an organization made up of hundreds or thousands of individuals to accomplish the same thing. Despite millions of dollars invested in learning initiatives, organizations struggle to become learning organizations. In their article “Why Organizations Don’t Learn,” Francesca Gino and Bradley Staats discuss some barriers to learning, including 1) an excessive focus on success that prevents people from learning from failure, and 2) a tendency to rely on perceived experts rather than on the people who are on the front lines, dealing with and learning about a problem (2015). Another barrier to organizational learning is a tendency to view it as simply the acquisition of information (the know-how), without giving equal weight to the big-picture understanding (the know-why) that comes from actual experience at the individual, team, project, and corporate level. As a result, organizations as a whole, and the individuals within them, fail to realize that the best way to learn about a job is often by actually doing the job. It’s at the project level that individuals achieve growth and learning, and eventually succeed in reaching their goals. 14.5 Sustainability: Thick or Thin? As you look ahead for ways to expand your project management skills and knowledge, put learning about sustainability at the top of your list. First of all, you need to figure out where you and your organization stand on questions of sustainability. These days, organizations like to make big claims about their commitment to preserving natural resources, but in reality, their efforts often amount to little more than earnest public relations campaigns. 
In fact, they have no real interest in overturning the dominant paradigm, which sees the natural world solely as a supply of resources for human use. To come to terms with your ideas on sustainability, you need to understand your personal definition of the kind of value you want to create as an engineer. In his book The New Capitalist Manifesto, Umair Haque introduced the idea of thin and thick value. Thin value is consumerist (think McMansions and Hummers); often generated “through harm to or at the expense of people, communities, or society”; unsustainable because it is created with no regard to the environment; and, according to Haque, ultimately meaningless because “it often fails to make people, communities, and society durably better off in the ways that matter to them most” (2011, 19-20). By contrast, thick value is everything thin value is not. It is sustainable and meaningful over the long term, helping support communities and preserving the environment while allowing a business to generate a profit. Haque points to companies like Wal-Mart, Nike, and Starbucks as examples of thick-value enterprises. But let’s assume you and your organization share a very real commitment to sustainability. You still need to figure out the limits of your commitment in the face of financial realities. As a way of assessing individual or organizational approaches to sustainability, Robert O. Vos reinterpreted Haque’s ideas, defining thin and thick versions of sustainability. Thin sustainability views financial capital and natural capital (that is, natural resources) as equally important. It seeks “to ensure that the overall value of natural and financial capital must be undiminished for future generations, even if the mix of the two is allowed to change.” It assumes that “economic growth is highly desirable and has infinite potential; growth is assumed to occur due to the capacity of technology, through human ingenuity, to make more with less and…to make substitutes for destroyed natural capital” (2007). In other words, thin sustainability is buoyed by a faith in the power of technology to make up for the damage humans inflict on the environment. Thick sustainability takes a harder line, viewing any diminution of natural capital as unacceptable. Thick versions of sustainability look to redefine “how we measure economic growth; they may look to see reductions in growth rate or even reductions in the size of the economy. To mitigate this definition, thicker versions of sustainability often differentiate between growth and development. The focus here is on new ways of measuring the quality of life or of products, rather than as monetary values of economic output” (Vos 2007). So where do you stand on the thin/thick spectrum? And how about your organization? As you work to develop your personal project management maturity, you’ll need to think long and hard about these questions. To learn more, you can start by reading Becoming Part of the Solution: The Engineers Guide to Sustainable Development, by Bill Wallace. He encourages engineers to radically transform the way they work: Instead of finding ways to extract resources faster, we can be inventing and applying new technologies that use less material and energy. Instead of finding ways to sell more products, we can help clients get more service per unit of product. We can find ways to use natural systems to serve our needs for lighting, heating, and cooling. 
We can design buildings and other structures for flexibility in use, reuse, and recyclability, thereby reducing life cycle costs. Pursuing this course will bring about new engineering challenges, challenges that will force us to work smarter and call upon a broad set of skills and resources. These are the sorts of challenges that can attract young people into engineering, showing them how they can apply what they learn to make a difference in the world instead of following old and discouraging pathways. (2005, ix) Communicating Your Vision of Sustainability The ability to communicate effectively is essential in every part of an engineer’s job. But it is especially important in sustainable endeavors, which typically require a great deal of interaction between an organization and the general public. Such projects often hinge on the ability to get a wide array of stakeholders on board. Job one, then, is explaining exactly how your project will help society and protect the environment. Michael Mucha, Chief Engineer and Director for the Madison Metropolitan Sewerage District, and the current Chair for ASCE’s Committee on Sustainability, points out that Envision, a sustainability rating system for civil infrastructure, factors communication into its calculations: Whereas LEED is a sustainability rating system for habitable, vertical infrastructure, Envision is a rating system for horizontal, non-habitable infrastructure, like roads, wetlands restorations, airports, and water treatment facilities. It’s a way to evaluate how sustainable a project is. One measure for the Envision rating is how well you communicate with the public about the project. That illustrates the importance of communication in sustainable engineering. (2017) 14.6 Personal Project Management Maturity Taking the time to understand your organization’s project management maturity level offers a helpful corollary effect: it allows you to see your own personal development within a broader context, rather than seeing yourself as an isolated entity. You can’t really begin to pursue your larger professional goals until you understand where you fit into the big picture. If you find yourself working for a company with only the lowest level of project management maturity, you will likely have to lead the way to more effective project management processes, educating yourself in the process. If you work at a company with a well-established project management infrastructure, you will have more opportunities to learn from colleagues and upper management. Advice from a Microsoft Engineering Manager Ashwini Varma, principal group engineering manager at Microsoft, credits her desire to solve problems as a key to her success as a project manager. In an interview with Craig Lee, principal engineering manager at Microsoft, she shared some advice for maturing into an effective project manager: I’ve always had an innate drive to solve problems. When I see chaos, my first reaction is to organize. When I see pain, I want to heal it. This tendency made it natural for me to seek out roles in completely new areas, with new teams and new management. That wasn’t easy, but it gave me confidence to take on even more challenging work. In the process, I learned that you can’t force your will on a project team. You can’t start telling people who have already been working together what you want them to do now. Instead, you need to work deeply with a team, learn the technology, and develop a realistic understanding of what is possible. 
Only then can you start to comprehend how to build a sustainable, realistic plan, and only then can you establish your credibility with the team. Over time I also learned the importance of hiring the right team for the right problem. You can’t underestimate the importance of building the right team. The fact is, engineers are not interchangeable. You need to determine what you need to succeed, then hire engineers who can do that work. Of course, once you have the team you need, it’s essential to set up monitoring systems that keep you informed on their progress. I like to have multiple feedback loops that provide a picture of the project from different angles, and I encourage other managers on my teams to do the same thing. (2018) Whatever your situation, you need to commit to your own personal development. Here are some tips to help you pursue growth as a confident, competent project manager, and a leader in your organization. • Commit to the following practices you have learned about throughout this book: • Embracing living order tactics, using them whenever they are appropriate • Making reliable promises • Implementing Lean principles whenever they are appropriate • Maintaining a clear, sustained focus on value • Providing meaningful, current, and accurate information • Engaging constructively in difficult discussions and being willing to share bad news • Cultivating a culture of learning and adaptation on your project teams • Use pull planning instead of push planning whenever appropriate • Take advantage of formal and informal learning opportunities • Read the appendix to High Flyers: Developing the Next Generation of Leaders, by Morgan W. McCall: In the book’s appendix, “Taking Charge of your Development,” McCall includes a host of useful suggestions, checklists, and questionnaires. He also offers practical yet inspiring advice, such as the following: Perhaps the most crucial skill of all when it comes to personal growth is learning how to create a learning environment wherever you are. There is no pat formula, but there are some common-sense actions that might help. Treat people in ways that make them want to coach you, support you, give you feedback, and allow you to make mistakes. Seek out feedback on your impact, and information on what you might do differently. Experiment. Take time to reflect, absorb, and incorporate. (Morgan W. McCall 1998) • Write a “lessons learned” summary for each project: The post-course self-assessment and key take-aways document that you are assigned to complete at the end of this class are your opportunities to write the kind of reflective “lessons learned” summary that you should continue to create throughout your career. Even if your organization doesn’t require it, take the time to compile such an assessment at the end of each project or phase. Don’t waste time trying to write polished prose—just make notes about what did and didn’t work. As suggested in Lesson 12, you could make a short video or audio recording instead if that would be easier than putting your thoughts in writing. • Tell stories: Sharing stories with colleagues about past work experiences is an important part of professional development. Sometimes one well-told tale—perhaps shared over lunch or in an elevator on the way to a meeting—can teach more about how a company works than a week of classroom training. Take the time to listen to the stories your coworkers have to share. Consider keeping a list of insights gleaned from casual conversations over the course of a month. 
You’ll be surprised how much you learned when you thought you were doing something else. Peter Guber’s seminal article, “The Four Truths of the Storyteller,” published in the Harvard Business Review, documents the power of stories to motivate and inspire: https://hbr.org/2007/12/the-four-truths-of-the-storyteller. • Cultivate a relationship with a trusted mentor: Having an external point of reference for honest feedback can be invaluable. When you think you have enough experience, offer to serve as a mentor for other people, sharing what you have learned, and staying alert to what you can learn from their experiences. Three Types of Mentorship In Becoming a Project Leader, Terry Little describes three types of mentorship. The first is formal mentoring programs within organizations, which almost never work. As Terry explains, “Many so-called leaders fail to recognize that mentoring is as important as anything they do and more important than most of what they do.” The larger problem with formal mentoring, however, is the fact that mentees “are incentivized by external reward rather than a desire to improve and grow.” The second is informal mentoring, in which someone more senior in the company chooses mid-level managers to mentor. Terry’s approach: “I meet with each person I mentor regularly—nominally once a quarter. I also meet with everyone I mentor as a group once each six months. In between, I send articles or suggested readings, as well as some words of counsel that come to me. To me and to them it’s critical that these things be predictable and personal—something they can count on and that means something to them as diverse individuals.” The final type of mentorship is informal-informal mentoring. Terry explains: “As we progress up the career chain, our behaviors become more and more visible to an increasingly larger number of people. We are not conscious of it, but others take their cues from those higher up the bureaucratic pyramid than they are. They observe our behavior and make judgments about it. Is it something worth emulating? If so, how can I adapt that behavior to my unique personality? Is it something to avoid? If so, how do I sensitize myself so that I don’t do it unconsciously? Much of what we turn out to be as individuals derives from what we have learned from observing others—not from what others have told us, what we have read and so forth. When others seek to emulate us, we have mentoring at its finest. But when one sees basic leadership principles working effectively in real life, it can have a profound effect” (Little 2004). • Seek professional and personal experiences that broaden your skills: You can’t expect to learn much from familiar experiences, so look for things that take you a few steps outside your normal comfort zone. For example, direct, face-to-face interactions with customers and colleagues you don’t normally interact with will teach you volumes about how your organization works (and doesn’t work). • Don’t shy away from leadership roles: Leading projects is often the first step in the development path for a new manager. A project, whether big or small, offers a unique opportunity to enhance leadership skills without necessarily having direct authority over all team members. • Embrace challenges: Don’t shy away from difficult challenges just because you think they’ll make your life complicated. Think of new job assignments as opportunities for growth and development. This is especially true of new job assignments outside of engineering, in sales, marketing, or other areas. 
• Cultivate grit: Best-selling author Angela Duckworth argues that the secret to success is grit—that is, passion and perseverance in pursuit of very long-term goals. Gritty people display extraordinary stamina, work extremely hard, and are willing to pick themselves up after failure and try again. She explains her research on the topic in this six-minute TED talk: https://www.ted.com/talks/angela_lee_duckworth_grit_the_power_of_passion_and_perseverance#t-173940. • Be prepared to make the ethical choice: In Lesson 8 you read about the many factors affecting our perceptions of right and wrong. Often moral grey areas can make it hard to decide on the right course of action, so you have to lay the groundwork for ethical behavior ahead of time. Do your personal values align with the goals of your organization? Do they align with your individual projects? Take some time to discuss these questions with colleagues who have experience with similar situations. Also make sure you are familiar with the Code of Ethics for Engineers, published by the National Society of Professional Engineers, which is available here: https://www.nspe.org/sites/default/files/resources/pdfs/Ethics/CodeofEthics/Code-2007-July.pdf. Consider making a list of things you absolutely will never do. Then you can refer back to it in the future, when you’re wondering if a particular choice is the ethical one. This can be surprisingly effective in keeping you on the high moral ground. Protecting the Creative Process In his book Creativity, Inc., Ed Catmull, president of Pixar Animation and Disney Animation, describes the project management techniques that brought to life animation classics like Toy Story and The Incredibles. It all comes down to embracing the risks and uncertainties that allow true creativity to flourish: There are many blocks to creativity, but there are active steps we can take to protect the creative process…. The most compelling mechanisms to me are those that deal with uncertainty, instability, lack of candor, and the things we cannot see. I believe the best managers acknowledge and make room for what they do not know—not just because humility is a virtue but because until one adopts that mindset, the most striking breakthroughs cannot occur. I believe that managers must loosen the controls, not tighten them. They must accept risk; they must trust the people they work with and strive to clear the path for them; and always, they must pay attention to and engage with anything that creates fear. Moreover, successful leaders embrace the reality that their models may be wrong or incomplete. Only when we admit what we don’t know can we ever hope to learn it. (2014, xv-xvi) It might seem obvious that creativity is essential to entertainment companies like Pixar and Disney. But Catmull argues that protecting the creative process is essential in all types of organizations. He encourages managers to actively safeguard their teams’ creative abilities, thereby creating a safe space for team members to take risks, by doing the following: • Create a flat communication structure in which any person in the organization can talk to any other person, without regard to rank in the larger organizational structure. And strive for candor in project discussions. “Candor is forthrightness or frankness…. The word communicates not just truth-telling but a lack of reserve…. A hallmark of a healthy creative culture is that its people feel free to share ideas, opinions, and criticisms. 
Lack of candor, if unchecked, ultimately leads to dysfunctional environments” (2014, 86). • Constantly look for hidden problems, and don’t fall for the false notion that monitoring data can point out every possible issue. “‘You can’t manage what you can’t measure’ is a maxim that is taught and believed by many in both business and education sectors. But in fact, the phrase is ridiculous—something said by people who are unaware of how much is hidden. A large portion of what we manage can’t be measured, and not realizing this has unintended consequences. The problem comes when people think that data paints a full picture, leading them to ignore what they can’t see. Here’s my approach: Measure what you can, evaluate what you measure, and appreciate that you cannot measure the vast majority of what you do” (2014, 219-220). 14.7 Practical Tips for Organizational Development Here are some ideas to help you help your company mature into a more effective organization: • Model good behavior: Lead the way by modeling the practices you would like to see adopted throughout your organization. Start within your immediate circle of influence—the individuals you work with on a daily basis, the teams you belong to. Good ideas can be contagious, especially if people see them in practice and experience their benefits. • Develop a shared vision: Collaborate with like-minded and motivated colleagues in your organization to develop a plan for leading project management growth within your organization. Stay focused on changes that will deliver value, not processes that are ends in themselves. • Apply what you’ve learned about living order: Think about what you’ve learned in this course and make a list of ways you can use your new understanding of managing projects in living order to benefit your organization. Add this to your “key take-aways” for periodic review. • Compare your organization to other organizations: People often complain about their jobs, implying that no one does anything right. But that’s rarely true. Benchmark organizations that are similar to yours. How does your organization compare? You may find that your organization actually does many things better than the competition. If that’s the case, use your insights into your organization’s strengths as an impetus to improve in those areas even more. Learning from others outside your industry is another way to grow as an organization. Project management is a key skill that is used across different end markets, products, and processes. • Be mindful of the needs of your specific type of organization: Every organization is in a different stage of its development. A new start-up has different needs from an established company in a key industry. If you go to work for a new organization, you might find that basic project management processes and tools are nonexistent or immature. Indeed, entrepreneurs sometimes pride themselves on building hyper-flexible organizations in which fixed procedures and processes have no place. But as Wanda Curlee argues, “processes and procedures are not the antithesis of entrepreneurship and flexibility. In fact, project, program, and portfolio management can help a startup manage growth” (2015). You can read her complete article on the topic of startups and project management here: https://www.projectmanagement.com/blog-post/12961/Startups-and-Project-Management–They-Aren-t-Opposites. • Don’t focus on one project maturity model too early: Review multiple models for project maturity development. 
Compare their visions of project maturity, and identify areas of growth that would improve your organization’s ability to consistently deliver successful project results. • Experiment: Learning through small experiments allows trial and error without significant negative repercussions. Piloting ideas for a project is one way to experiment. ~Summary • Personal and organizational development are processes of transformation—change that makes a person a more effective project manager, change that makes an organization a more successful company. • All organizational and personal change starts with learning. According to Daniel H. Kim, learning is a process of accumulating both know-how and know-why. • The term project management maturity refers to the “progressive development of an enterprise-wide project management approach, methodology, strategy, and decision-making process. The appropriate level of maturity will vary for each organization based on its specific goals, strategies, resource capabilities, scope, and needs” (PMSolutions 2012). A great many models and assessment tools have been created to measure project management maturity in every industry. Vital elements of project management maturity include a good knowledge management system and a culture that values learning at all levels. • Thin sustainability views financial capital and natural capital (that is, natural resources) as equally important. Thick sustainability takes a harder line, viewing any diminution of natural capital as unacceptable (Vos 2007). ~Glossary • Capability Maturity Model (CMM)—The first widely used maturity model, developed in the software industry in the late 1980s by the Software Engineering Institute (SEI) at Carnegie Mellon University and the United States Department of Defense. • knowledge management—The “planning, organizing, motivating, and controlling of people, processes, and systems in the organization to ensure that its knowledge-related assets are improved and effectively employed” (King 2009). • learning—“Increasing one’s capacity to take effective action” (Kim 1993). • learning organization—According to David A. Garvin, “an organization skilled at creating, acquiring, and transferring knowledge, and at modifying its behavior to reflect new knowledge and insights” (1993). • mindfulness—A state of nonjudgmental awareness. • organizational learning—The process of retaining, storing, and sharing knowledge within an organization. More than merely the sum of the knowledge of all the members of the organization, achieving organizational knowledge “requires systematic integration and collective interpretation of new knowledge that leads to collective action and involves risk taking as experimentation” (Business Dictionary). • Organizational Project Management Maturity Model (OPM3)—The most widely recognized maturity model, developed by the Project Management Institute. The OPM3 is designed to help an organization support its organizational strategy from the project level on up through the portfolio and program levels. • project management maturity—The “progressive development of an enterprise-wide project management approach, methodology, strategy, and decision-making process. The appropriate level of maturity will vary for each organization based on its specific goals, strategies, resource capabilities, scope, and needs” (PMSolutions 2012). • project maturity model—A set of developmental stages that can be used to evaluate an organization’s state of maturity in a particular domain.
The problem is not that there are problems. The problem is expecting otherwise and thinking that having problems is a problem. —Theodore Rubin, American psychiatrist and author Objectives After reading this chapter, you will be able to • List some new practices for project management based on the first fourteen lessons in this book • Discuss James March’s ideas on thinking like a poet to become a better project manager • Explain the difference between event-driven and intention-driven project management • Get started creating your own professional development plan The Big Ideas In this Lesson • As you look to the future, keep in mind that a living order approach to personal development focuses on lifelong learning, rather than understanding only what you need to know to master the current situation. • A poet’s ability to interpret events, and to tell other people what they mean, is extremely useful in living order, where unexpected events unfold every day, unforeseen by even the most carefully constructed project plan. • After formulating an interpretation of events, project managers must come up with a solution, circling back to revise their interpretation of the situation as necessary. • The event-driven nature of project management, which can pull your attention in a dozen different directions at once, can leave little time and energy for planning and acting on your long-term career goals. It’s essential to take time to plan your professional development. 15.1 Reassess and Plan for the Future This last lesson is an opportunity for you to look back at your understanding of project management at the beginning of this book; assess how your abilities align with your new, expanded conception of project management; and develop a plan for incorporating these new ideas and practices into your work. In other words, this is an opportunity to plan for change. A short list of new practices for readers of this book might include • Recognize when a geometric order approach is useful in a project, and when a more flexible, living order approach is best • Look for opportunities to incorporate Lean or Agile principles into your projects • Make decisions about new projects in relation to your organization’s overall project portfolio • Identify what makes a particular project successful, with a focus on the customer’s definition of value, and communicate that to all stakeholders. • Limit the amount of detail in a project plan and schedule to the amount required to effectively guide the project team • Be prepared to adapt and improvise in fast-changing situations, rather than attempting to stick to a plan that is no longer relevant • Employ monitoring and control strategies that provide the right amount of information, targeted to the people who need that information to make on-going decisions about the project • Regularly conduct project-review and project-closure activities, so as to keep current projects on track and retain vital information for future projects • Take advantage of learning opportunities wherever they arise throughout your career As you have read throughout this book, these are all important and effective ways of keeping projects on-track and headed toward success in the unpredictable, living-order conditions of the modern world. Hopefully, you have already begun changing your approach to project management to incorporate some or all of these practices. You might also hope to lead your entire organization toward some essential changes as well. 
So, let’s take some time to think about the nature of change and the mind-expanding possibilities of new ideas. It all starts with seeing the world differently. 15.2 Technical Project Manager as Plumber and Poet In his novel 1984, George Orwell describes a dystopian state in which human thought is controlled by Newspeak, the state’s official language, which citizens are forced to adopt. Because Newspeak lacks words like “freedom” and “justice,” thoughts about such things gradually become “literally unthinkable, at least so far as thought is dependent on words…. Newspeak was designed not to extend but to diminish the range of thought, and this purpose was indirectly assisted by cutting the choice of words down to a minimum” (Orwell). By restricting the ability of citizens to think thoughts that might upset the current state of affairs, the government is able to prevent revolution from taking root. In his novel, Orwell is making points about the nature of repressive political states. But he’s also commenting on the power of any organization to restrict the way its members think by controlling the official terminology and sanctioned procedures for getting things done. In any large organization, giving an honest assessment of any situation can be exceedingly difficult, especially when the organization itself wants you to see things differently. In an interview with the Harvard Business Review, James March, one of the seminal scholars and thinkers on organizational theory, describes the predicament of astute modern managers, who perceive uncertainty all around, but are compelled by the pressures of organizational thinking to blind themselves to that reality: The rhetoric of management requires managers to pretend that things are clear, that everything is straightforward. Often, they know that managerial life is more ambiguous and contradictory than that, but they can’t say it. They see their role as relieving people of ambiguities and uncertainties. They need some way of speaking the rhetoric of managerial clarity while recognizing the reality of managerial confusion and ambivalence. (Coutu 2006) In a collection of his lectures, James March explains that to avoid this kind of self-imposed blindness, managers need to combine their natural plumber’s tendency—the tendency to zero in on problems and fix them—with the poet’s bold, creative approach to the world: There are two essential dimensions of leadership: “plumbing,” i.e., the capacity to apply known techniques effectively, and “poetry,” which draws on a leader’s great actions and identity and pushes him or her to explore unexpected avenues, discover interesting meanings, and approach life with enthusiasm. The plumbing of leadership involves keeping watch over an organization’s efficiency in everyday tasks, such as making sure the toilets work and that there is somebody to answer the telephone. 
This requires competence, not only at the top but also throughout all the parts of the organization; a capacity to master the context (which supposes that the individuals demonstrating their competence are thoroughly familiar with the ins and outs of the organization); a capacity to take initiatives based on delegation and follow-up; a sense of community shared by all members of the organization, who feel they are “all in the same boat” and trust and help each other; and, finally, an unobtrusive method for coordination, with each person understanding his or her role sufficiently well to be able to integrate into the overall process and make constant adjustments to it…. Leadership also requires, however, the gifts of a poet, in order to find meaning in action and render life attractive. The formulation and dissemination of interesting interpretations of reality form the basis for constructive collective action…. Words allow us to forge visions, and poetic language, through its evocative power, allows us to say more than we know, to teach more than we understand. (March and Weil 2005) Thinking like a poet opens the door to the kind of personal change that will make you a better project manager. At the same time, thinking like a poet will give you the ability to inspire change in your organization. As you’ll see in the next section, a poet’s ability to interpret events, and to tell other people what they mean, is extremely useful in living order, where unexpected events unfold every day, unforeseen by even the most carefully constructed project plan. You Already Think Like a Poet James March has suggested that managers could benefit from reading poetry, which forces readers to marshal their powers of interpretation, looking for multiple layers of meaning in any one word (Coutu 2006). But whether or not you are interested in reading poetry, you should at least be aware that living order is ultimately a poetic idea, developed by the French philosopher Henri Bergson. When he first used the term, in his book Creative Evolution, Bergson was talking about the artistic process, which appears chaotic from the outside, but can produce works of extraordinary order and complexity (1911). Living order is a complicated idea, and at first blush it doesn’t even make sense. How can order be alive? What does that mean? Hopefully, after fourteen lessons, you do have a sense of what it means. You probably even feel comfortable using the term “living order” to identify certain phenomena in your professional life. That is, you reinterpreted two English words—“living” and “order”—and, as a result, internalized a new understanding of the world. In other words, you have begun thinking like a poet. 15.3 Event-Driven and Intention-Driven Project Management As an engineer, you are probably inclined toward the plumbing tasks associated with leadership. After all, almost by definition, engineers like to fix things. And you might think that the poet part of the equation is something entirely new to you. But the work of Swedish researchers Ingalill Holmberg and Mats Tyrstrup suggests that employing a poet’s interpretive skills—that is, looking at a situation and telling other people what it means—is something good managers do every day. You probably have more experience at it than you think. Holmberg and Tyrstrup developed their theory when studying everyday leadership—that is, the decisions and activities that take up the vast majority of a manager’s time. 
In interviews with managers at TECO, the Swedish international telecom company, they found that only 10% of projects were completed in the traditional, geometric way, with events unfolding according to plan. Another 20% were driven by a manager who saw himself or herself as heroic for solving unexpected problems and forcing the project to unfold as originally planned. The researchers noted that managers especially loved to describe a project as “a story of heroic feats.” They explain that many managers tend to describe their efforts according to this model. They begin with a challenging problem (which, by the way, is much bigger than initially expected). A process follows that includes many difficult turns. Knotty problems arise, and at times everything looks bleak—very bleak indeed. But the competent manager has a basic agenda consisting of a number of stages to follow and steps to take. In hindsight, it can be claimed that the whole process has gone according to plan and a successful conclusion has been reached. (54) Managers who describe their successes in this way tend to think leadership is largely a matter of knowing “today what should be done tomorrow in order to reach the desired results” (54). In other words, they see projects as intention-driven. They believe that a heroic manager with clearly defined intentions can make anything happen. But according to Holmberg and Tyrstrup, managers who view themselves in this heroic light are deceiving themselves, because the vast majority of a manager’s time is spent on problems that nobody did or could expect. The researchers call these “Well then—what now?” situations. They argue that almost all of a manager’s time is spent trying to answer that question—and not because things were poorly planned at the outset, but because that’s just how things work in a complicated organizational setting. As much as heroically inclined managers would like to believe that their own intentions are the most powerful force in any project, in reality projects are nearly always event-driven, with managers forced to respond to changing situations from day to day. “Either something unexpected happens, or what was expected to happen does not” (58). Using a Time Management Quadrant As you turn your attention from one “Well then—what now?” situation to another, it’s easy to lose track of priorities. In particular, you might fail to leave time for the large-scale, sense-making thinking required to keep a project on track over the long term. In The 7 Habits of Highly Effective People, Stephen Covey recommends using a time management quadrant, like the one shown in Figure 15-1. Make a diagram like this, and then keep track of how you spend your time, writing each activity into the appropriate quadrant (Covey 1989). Figure 15-1: Time management quadrant People tend to spend most of their time on Quadrants I and III, neglecting Quadrant II. Thus, long-term planning and sense-making (which are not urgent but very important) tend to fall by the wayside. For some suggestions on how to use a time management quadrant to eliminate pointless, Quadrant IV tasks from your work life in order to focus more on the essential Quadrant II tasks, see this helpful article: https://www2.usgs.gov/humancapital/documents/TimeManagementGrid.pdf. Holmberg and Tyrstrup describe a “Well then—what now?” project as follows: You find yourself in a problematic situation, working hard and wrestling with the issues as they appear, only to find you are constantly trying to grasp the situation. 
It is not at all certain how you got where you are or what the situation means. It is extremely difficult to assess how the situation fits with the intentions articulated a few days, a week, or a month ago. It is hard to tell what has been completed, what is still going on, or what is yet to be accomplished. People are constantly at your throat, asking for different instructions or directions. People higher up in the hierarchy, those lower down, and even those at the same level want information and reports that give the results of decisions taken and activities performed. One event seems to give rise to another according to a logic that is anything but obvious. As a manager, you are tired and need a break to go through your papers, emails, the heaps of files, and the phone messages in order to sort out your thoughts and feelings. (55) In a situation like this, the manager’s main job is to interpret what’s going on—that is, make sense of the situation: In each case, the manager had to interpret what had already happened in order to formulate what the next step should be. What was the significance of what had or hadn’t happened? How might these events and non-events be best explained, and what are their implications? (59) After formulating an interpretation, managers must come up with a solution, circling back to revise their interpretation of the situation as necessary. This brings us back to March’s idea that a leader is partly a plumber and partly a poet. A “Well then—what now?” problem can only be solved by a manager with a poet’s ability to interpret the situation, to see into the heart of the matter and explain to everyone else what’s going on. Then, acting like a plumber, the manager needs to figure out a way to solve the problem. Often, solving the problem also requires quite a bit of poetic creativity and vision. Holmberg and Tyrstrup conclude their study with some practical suggestions designed to nudge large organizations away from the assumption that unexpected events can be prevented by more detailed planning, which can be extremely time-consuming and costly. Instead, they argue, organizations should focus on hiring managers who can deal with the unexpected. Their research implies that organizations should focus on “selecting managers who are prepared to give up unilateral control and instead to rely on the creativity inspired by improvised actions” (65). Likewise, management training, they argue, should focus on helping managers come up with creative solutions to the question “Well then—what now?” As you look to your future as a technical project manager, be alert to your own tendency to see yourself as a hero in the midst of chaos. Instead, remember that resolving “Well then—what now?” situations is the main job of a project manager. You need to be able to deal with them creatively and with as little drama as possible. Becoming an Expert Technical Project Manager: the 10,000 Hour Rule, Revised You might have heard people talk about the 10,000 hours rule. As popularized by Malcolm Gladwell in his book Outliers, the rule holds that it takes 10,000 hours of practice to become an expert at anything, including project management. But according to Anders Ericsson, the psychologist whose research inspired the maxim, simply doing something over and over won’t lead to true expertise. Instead, you need to actively correct your performance to achieve real excellence. 
Maria Popova summarizes his findings: The secret to continued improvement, it turns out, isn’t the amount of time invested but the quality of that time. It sounds simple and obvious enough, and yet so much of both our formal education and the informal ways in which we go about pursuing success in skill-based fields is built around the premise of sheer time investment. Instead, the factor Ericsson and other psychologists have identified as the main predictor of success is deliberate practice—persistent training to which you give your full concentration rather than just your time, often guided by a skilled expert, coach, or mentor. (n.d.) So, to become an expert technical project manager, you need to invest time in constant learning and training. But you need to do this with an active attention toward self-improvement. To avoid “ceasing to grow and stalling at proficiency level … you need to continually shift away from autopilot and back into active, corrective attention” (Popova). You can read Popova’s excellent summary of the latest research on the 10,000 hours rule here: https://www.brainpickings.org/2014/01/22/daniel-goleman-focus-10000-hours-myth/. And while you’re at it, consider subscribing to her email newsletter, which explores a huge variety of topics related to culture, history, and art. It’s a wonderful way to learn about issues that lie beyond the boundaries of the engineering world, making it an excellent professional development resource. Reading regularly will help keep you familiar with the big topics currently circulating in the culture. You can subscribe here: https://www.brainpickings.org/newsletter/ 15.4 Creating a Professional Development Plan This 12-minute video provides a helpful introduction to the process of creating a professional development plan: https://www.youtube.com/watch?v=PRZcstlx6KQ. In this book, you’ve learned about the importance of planning to ensure a technical project’s success. The same is true of your career. Unfortunately, the event-driven nature of project management, which can pull your attention in a dozen different directions at once, can leave little time and energy for planning and acting on your long-term career goals. As a result, some project managers end up moving from one job to another with no real plan in mind. In other words, they fall into the trap of tolerating an event-driven series of positions at various organizations, rather than insisting on an intention-driven, goal-oriented career. The first step in taking control of your career is creating a professional development plan (PDP), which is a document that describes 1. Your current standing in your field, including a brutally honest assessment of your strengths and weaknesses. Use the Project Management Self-Assessment form provided in Figure 15-2 to begin holding yourself to account. 2. Your short- and long-term career goals. Creating a list of goals typically involves a fair amount of research, so that you can be sure you fully understand the options available to you. 3. A plan for achieving your goals that includes specific deadlines. Again, this part of your professional development plan will require some research, so that you fully understand the best possible ways to achieve your goals. For example, you might want to investigate useful certifications or professional conferences. 
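Because a PDP is simply structured information about where you stand, where you want to go, and when you plan to get there, some project managers find it helpful to keep the plan in a lightweight, machine-readable form that is easy to review on a schedule. The following Python sketch is purely illustrative (the class names, sample entries, and dates are hypothetical and are not part of this book's self-assessment form), but it shows one way to record the three components above and to flag goals whose deadlines have slipped:

```python
from dataclasses import dataclass, field
from datetime import date


@dataclass
class Goal:
    """A single career goal with a target deadline."""
    description: str
    deadline: date
    achieved: bool = False


@dataclass
class ProfessionalDevelopmentPlan:
    """Mirrors the three PDP components: current standing, goals, and a
    deadline-driven plan for achieving them."""
    strengths: list = field(default_factory=list)
    weaknesses: list = field(default_factory=list)
    goals: list = field(default_factory=list)

    def overdue_goals(self, today=None):
        """Return unmet goals whose deadlines have passed, a prompt to
        revisit and revise the plan rather than let it go stale."""
        today = today or date.today()
        return [g for g in self.goals if not g.achieved and g.deadline < today]


# Hypothetical example entries; replace them with your own honest assessment.
pdp = ProfessionalDevelopmentPlan(
    strengths=["scheduling", "stakeholder communication"],
    weaknesses=["risk analysis", "delegation"],
    goals=[Goal("Earn a project management certification", date(2026, 6, 30))],
)
print(pdp.overdue_goals())  # prints [] until a deadline passes unmet
```

The self-assessment form in Figure 15-2 below can supply the strengths and weaknesses entries. Whether you track the plan in code, a spreadsheet, or a notebook matters far less than revisiting and revising it regularly, as the rest of this section emphasizes.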
Figure 15-2: Project Management Self-Assessment Download Figure 15-2 in PDF form To create a meaningful development plan, you also need to engage a trusted mentor and perhaps a few valued colleagues. Connect with people who are willing to share experiences with you, who understand the big picture, and who can give you honest assessments of your strengths and weaknesses. Definitely take advantage of any formal mentorship programs available through your organization or in your field. However, according to Terry Little in Becoming a Project Leader, you are likely to get the best results through more informal mentoring arrangements with willing senior employees. It’s possible a senior manager who takes mentoring seriously will approach you about establishing a mentor/mentee relationship, but if that doesn’t happen, don’t be afraid to seek out your own mentors. But what makes a good mentoring relationship? According to Little, the following principles are a good foundation: 1. Mentors must be willing to spend time doing it. 2. Mentees must be willing to learn. 3. Mentoring is everyone’s responsibility, not just the responsibility of those in senior positions. 4. Advice to mentees should be predictable and personal. 5. With any position you hold, your behavior should be worthy of emulation. (2018) No matter how much work you and your mentor put into your professional development plan, the plan is only useful if you actually monitor your progress in achieving your goals and take the time to update the plan throughout your career. Whereas projects are team efforts involving collaboration among many parties, your professional development is entirely your responsibility. In an article for Forbes, Chrissy Scivicque emphasizes that while your organization’s human resources department might help you create a plan, executing it successfully is really up to you: Your professional development is not the responsibility of anyone but you. Not your company, not your boss, not even your coach. Just you. Some companies try to help with the process by helping employees create professional development plans (PDP) as part of the performance review process. While it’s a nice gesture, it simply isn’t very useful for the vast majority of employees. In my experience, I’ve found that a PDP created at the behest of an employer is often an exercise for management, not the employee. In fact, if the employee will later be judged on that criteria, he or she actually feels encouraged to aim low so as not to be set up for future failure. For those who happen to have bigger goals that don’t involve working for the company, the PDP is pretty meaningless. The employee ends up playing a game, telling the manager what he wants to hear and not using the plan to facilitate real, desired professional growth. Even if your company helps you develop a plan, it’s always a smart idea to create one of your own in private. This will help you identify and take action on growing the skills needed to achieve your true long-term career goals, whether or not they involve your current company. (2011) In her article, Scivicque also emphasizes that a professional development plan is only useful if you revise it regularly to reflect new opportunities and challenges, as well as your own changing aspirations. You can read her complete article here: http://www.forbes.com/sites/work-in-progress/2011/06/21/creating-your-professional-development-plan-3-surprising-truths/#5cd14a4627bb. 
Experience, Reflection, and Mentoring In their book Becoming a Project Leader, Alexander Laufer, Terry Little, Jeffrey Russell, and Bruce Maas provide real-life case studies of people struggling with and growing into the role of project manager. They explain that “the large sample of project managers we studied did not become successful due to intensive and formal classroom education. Rather, the primary means for their development was on-the-job learning” (109). According to their research, the three most important avenues for this vital form of learning are pursuing challenging tasks to gain experience, working with a mentor, and learning through communities of practice. “Project managers develop as successful leaders by employing a variety of practices which are from bottom to top (the project manager tackling challenging tasks and affecting the organization), top to bottom (mentoring), and across the organization (community of practice)…. If an organization is to grow and weather the inevitable ups and downs it will face in a dynamic environment, professional development is essential” (127). One of the great benefits of taking part in a community of practice is that it offers a low-pressure setting in which people can talk about their work, usually in the form of stories. The ability of stories to transfer knowledge and wisdom among people cannot be overemphasized. “People love to read stories because they attract and captivate, can convey a rich message in a non-threatening manner, and are memorable. Stories are thus the most effective learning tool at our disposal, especially in situations where the prospective learner suffers from a lack of time—which is the case for most project managers” (Laufer et al. 2018, 121-122). All of these professional development tactics draw on the 70/20/10 model for learning and development, which holds that 70% of learning comes from challenging assignments, 20% comes from relationships with coworkers including mentors and communities of practice, and 10% from formal training. You can learn more about the 70/20/10 model here: https://trainingindustry.com/wiki/content-development/the-702010-model-for-learning-and-development/. 15.5 The Future of Technical Project Management One important part of planning for your professional development is keeping an eye on trends that will shape technical project management in the coming years. Technological advances, expanding globalization, and new communication and data systems will all change how technical project managers do their jobs. Here’s a summary of emerging trends, along with links for more information: • Data analytics: Businesses are increasingly collecting mountains of data on quality assurance testing, customer behaviors and preferences, in-field equipment performance, warranty claims, and so on. These data can drive the justification for projects, help to focus project efforts, and inform project progress. Project managers will need to cultivate their ability to extract meaningful information from otherwise overwhelming stores of data, and then use that information to make crucial decisions and to shape day-to-day project management. These articles explain how to use data analytics to improve project outcomes: • Business Agile: The Agile development model has leapt over the borders of IT projects into the larger world. 
Companies are now incorporating Agile principles into “the whole of a company’s function,” with massive implications for how business decisions are made and plans are executed (Burger, Business Agile and the Future of Project Management 2017). Among other innovations, this new approach to business emphasizes continuous learning, information-sharing among departments, and incentives that “support measuring outcomes, making evidence-based decisions, and learning” (Gothelf 2014). Read this blog post for an in-depth look at the implications of business Agile: https://blog.capterra.com/business-agile-and-the-future-of-project-management/. This article from the Harvard Business Review makes the case for weaving Agile-thinking into everything a company does: https://hbr.org/2014/11/bring-agile-to-the-whole-organization. The website for the Agile Business Consortium is an excellent source of resources on the topic: www.agilebusiness.org/business-agility. • Diversity initiatives: According to Rachel Burger, the world of project management has been slow to make hiring a diverse group of employees a priority, even while other fields have been making major strides in this area. But a trend toward diversity “is trickling in from the business community and the political climate as a whole.” As a result, project managers can expect to hear more about the need for diverse teams—that is, teams that include an equal number of men and women, with representation from as many races, cultures, sexual orientations, and religions as possible, and built-in measures for accommodating disabilities. But these changes will not occur in a vacuum. You should expect lots of arguments about the best way to proceed. Burger advises project managers to “take advantage of new community offerings about diversity and inclusiveness, and get ready for industry-level conflicts about people management in regards to ability, age, ethnicity, gender, race, religion, sexual orientation, and class” (2017). You’ve read about the benefits of having a diverse team in Lesson 5. This article summarizes these benefits: https://www.liquidplanner.com/blog/the-new-secret-to-successful-teams-diversity/. This article from PMI explains how to overcome misunderstandings that can arise on multicultural teams: https://www.pmi.org/learning/library/dealing-cultural-diversity-project-management-129. • The internet of things and artificial intelligence: Advances in technology will affect every business in the world, one way or another (Burger 2017). This is definitely true of the internet of things (IoT), which is the “system of interrelated computing devices, mechanical and digital machines, objects, animals or people that are provided with unique identifiers (UIDs) and the ability to transfer data over a network without requiring human-to-human or human-to-computer interaction” (TechTarget n.d.). This article by Anna Johansson describes the many ways IoT has changed and will continue to change modern business: https://www.huffingtonpost.com/anna-johansson/8-ways-the-internet-of-th_b_11763836.html. You probably already have experience with forms of artificial intelligence (AI) such as Amazon’s Alexa or Apple’s Siri. But these automated personal assistants are just the tip of the iceberg. This article describes ways that AI will change business in the near future: http://usblogs.pwc.com/emerging-technology/8-ways-ai-will-change-work/. 
• Globalization: The Levin Institute at the State University of New York (SUNY) defines globalization as “a process of interaction and integration among the people, companies, and governments of different nations, a process driven by international trade and investment and aided by information technology. This process has effects on the environment, on culture, on political systems, on economic development and prosperity, and on human physical well-being in societies around the world” (The Levin Institute n.d.). Globalization has already had a profound effect on the way the world does and will continue to do business in myriad ways. In a NASA roundtable discussion, Greg Balestrero discussed the effects of globalization on the supply chain: “It’s very difficult to think of any company or organization that doesn’t feel the pressures and the implications of globalization on the supply chain. And it’s an intellectual supply chain as well as a physical supply chain. The global supply chain is a growing issue…. With globalization comes a challenge of having a common framework and understanding—as simple as a lexicon, as complicated as a common process—for project and program management” (APPEL News Staff 2007). You can learn more about this essential topic at www.globalization101.org, a website maintained by the Levin Institute. Other issues that will continue to have a huge effect on technical project management in the coming years include • Lean and Agile: According to expert John Shook, despite almost two decades of effort, the construction industry is only in the early stages of effectively implementing Lean principles in all phases of construction (Wiegand 2016). Likewise, IT professionals still face many challenges in their quest to take advantage of Agile principles and practices in an organizationally appropriate and effective manner. • Emerging technology: An article in Engineering News-Record describes the use of hologram headsets that allow a worker to see a 3D model of what she needs to build directly on site, making it possible to begin assembling part of a building without even referring to a tape measure (Rubenstone 2016). This is just one example of the fast-moving technological changes that will affect your work as a technical project manager in the near future. • Partnerships: As the tendency toward globally interconnected businesses intensifies, formal international business partnerships will become the norm rather than the exception. To successfully manage projects that span multiple countries, you’ll need to focus on your cross-cultural competencies, making sure you are prepared to interact with people from all over the world. ~Practical Tips This set of practical tips summarizes the advice you’ve read in earlier lessons. To help promote an understanding of the role of geometric and living order in your organization’s projects, consider printing this list and posting it somewhere where your colleagues and project stakeholders can easily read it. And be ready to discuss these ideas with anyone who asks about them. • Throughout a project, recognize the tension that exists between geometric and living order, and avoid imposing a geometric process on a situation that requires a more flexible, living order approach. • Put as much effort as you possibly can into starting a project well because the way you start a project has a big impact on how you finish. • In all stages of a project, take time to remind stakeholders how the customer perceives the project’s value. 
Make sure everyone involved can clearly articulate the customer’s definition of the project’s value. • Continually work toward building a functional, collaborative team. Don’t waste time trying to achieve the impossibility of a perfect team. • Do all you can to make sure all project stakeholders understand the definition of project success. • Use the planning part of any project as an opportunity for thinking and collaboration. Focus less on the plan itself and more on starting the dialogue with all stakeholders. • Do not let a project evolve without continually referring to and connecting back to your organization’s overall strategy. At every stage of planning and executing a project, incorporate strategic thinking. • In all procurement-related tasks, focus on best value rather than least initial cost. • Use the scheduling part of any project as an opportunity to think about project tasks at varying levels of detail, and as an opportunity to communicate with stakeholders about the best way to achieve project success. • Don’t shy away from confronting the uncertainty in any project. Only by understanding the many forms of uncertainty associated with a project can you understand the degree of risk involved. • Accept the fact that resources are usually scarce and constrained, and apply your project management skills to deploy those scarce and constrained resources effectively. • In a dynamic, changeable environment, move beyond the traditional view of monitoring and control, which emphasizes gathering data about the past, and instead adopt a pull approach to monitoring and control, which emphasizes data about the current time and the immediate future. Finally, here are some concluding practical tips on management and leadership, adapted from the work of Alexander Laufer, whose book Mastering the Leadership Role in Project Management (2012) has provided a wealth of inspiration for these lessons. • Always keep the context in mind: Principles and practices must be modified to fit the context of a project situation. • Adapt when necessary, instead of attempting to control everything: Projects are plagued with questions and problems. A successful manager has the flexibility to adapt as necessary to address these matters and not simply strive to control them. • Be prepared to manage and lead: In some ways, a technical project requires the same combination of management and leadership as driving a car. You need to remain aware of everything going on with the controls on the dashboard (management) while at the same time looking out the windshield to make sure you reach your destination (leadership). • Be prepared for a shift from living order to geometric order, once things get going: Many projects start in living order (with a high degree of uncertainty) and transition to geometric order. • Don’t forget the beauty of AND: Project management often involves combining two different activities or ways of thinking about a project, such as leadership AND management, stability AND flexibility, processes AND practices, thinking AND doing. • Look for ways to collaborate at all times: The primary role of a project manager is to build collaboration, interdependence, and trust among the project stakeholders. • Think of yourself as a problem solver: Successful project managers develop expertise in problem identification and solving. • Remember, everything you do or learn adds to your wealth of knowledge and experience: Seek out new ways to add to your practical experience and overall knowledge. 
Job assignments are one obvious way to do this, but don’t forget other options, such as mentoring relationships, and stories told by your colleagues about past projects. ~Summary • New project management practices for readers of this book include recognizing when a geometric order approach is best and when a living order approach is best, being prepared to adapt and improvise, and limiting the amount of detail in a project plan and schedule to the amount required to effectively guide the project team. • According to James March, an effective leader knows how to work like a plumber, by “keeping watch over an organization’s efficiency in everyday tasks,” and also like a poet, who strives to “explore unexpected avenues, discover interesting meanings, and approach life with enthusiasm” (March and Weil 2005). • Many managers think leadership is largely a matter of knowing “today what should be done tomorrow in order to reach the desired results” (Holmberg and Tyrstrup 2012, 54). In other words, they see projects as intention-driven. But according to Swedish researchers Ingalill Holmberg and Mats Tyrstrup, managers who view themselves in this heroic light are deceiving themselves, because the vast majority of a manager’s time is spent on problems that nobody did or could expect. In other words, projects are nearly always event-driven, with managers forced to respond to changing situations from day to day (58). • The first step in taking control of your career is creating a professional development plan (PDP), which is a document that describes your current standing in your field, your short- and long-term career goals, and a plan for achieving your goals, including specific deadlines. ~Glossary • event-driven—Term used to describe a project that unfolds in response to changing events. • intention-driven—Term used to describe a project that unfolds according to the single-minded intention of the project manager. • Internet of things (IoT) —The “system of interrelated computing devices, mechanical and digital machines, objects, animals or people that are provided with unique identifiers (UIDs) and the ability to transfer data over a network without requiring human-to-human or human-to-computer interaction” (TechTarget n.d.). • professional development plan (PDP)—A document that describes 1) your current standing in your field, including a brutally honest assessment of your strengths and weaknesses; 2) your short- and long-term career goals; and 3) a plan for achieving your goals that includes specific deadlines.
Today’s U.S. corporate governance system is best understood as the set of fiduciary and managerial responsibilities that binds a company’s management, shareholders, and the board within a larger, societal context defined by legal, regulatory, competitive, economic, democratic, ethical, and other societal forces. Shareholders Although shareholders own corporations, they usually do not run them. Shareholders elect directors, who appoint managers who, in turn, run corporations. Since managers and directors have a fiduciary obligation to act in the best interests of shareholders, this structure implies that shareholders face two separate so-called principal-agent problems—with management, which is likely to be concerned with its own welfare, and with the board, which may be beholden to particular interest groups, including management. Agency theory explains the relationship between principals, such as shareholders, and agents, such as a company’s executives. In this relationship, the principal delegates or hires an agent to perform work. The theory attempts to deal with two specific problems: first, ensuring that the goals of the principal and agent are not in conflict (the agency problem), and second, reconciling the principal’s and the agent’s different tolerances for risk. Many of the mechanisms that define today’s corporate governance system are designed to mitigate these potential problems and align the behavior of all parties with the best interests of shareholders broadly construed. The notion that the welfare of shareholders should be the primary goal of the corporation stems from shareholders’ legal status as residual claimants. Other stakeholders in the corporation, such as creditors and employees, have specific claims on the cash flows of the corporation. In contrast, shareholders get their return on investment from the residual only after all other stakeholders have been paid. Theoretically, making shareholders residual claimants creates the strongest incentive to maximize the company’s value and generates the greatest benefits for society at large. Not all shareholders are alike, nor do they share the same goals. The interests of small (minority) investors, on the one hand, and large shareholders, including those holding a controlling block of shares and institutional investors, on the other, are often different. Small investors, holding only a small portion of the corporation’s outstanding shares, have little power to influence the board of the corporation. Moreover, with only a small share of their personal portfolios invested in the corporation, these investors have little motivation to exercise control over the corporation. As a consequence, small investors are usually passive and interested only in favorable returns. They often do not even bother to vote; they simply sell their shares if they are not satisfied. In contrast, large shareholders often have a sufficiently large stake in the corporation to justify the time and expense necessary to monitor management actively. They may hold a controlling block of shares or be institutional investors, such as mutual funds, pension plans, employee stock ownership plans, or—outside the United States—banks whose stake in the corporation may not qualify as majority ownership but is large enough to motivate active engagement with management. 
It should be noted that the term “institutional investor” covers a wide variety of managed investment funds, including banks, trust funds, pension funds, mutual funds, and similar “delegated investors.” All have different investment objectives, portfolio management disciplines, and investment horizons. As a consequence, institutional investors represent both another layer of agency problems and an opportunity for oversight. To identify the potential for an additional layer of agency problems, ask why we should expect that a bank or pension fund will look out for minority shareholder interests any better than corporate management. On the one hand, institutional investors may have “purer” motives than management—principally a favorable investment return. On the other hand, they often make for passive, indifferent monitors, partly out of preference and partly because active monitoring may be prohibited by regulations or by their own internal investment rules. Indeed, much of the recent governance debate focuses on the question of whether it is useful and desirable to create ways for institutional investors to take a more active role in monitoring and disciplining corporate behavior. In theory, as large owners, institutional investors have a greater incentive to monitor corporations. Yet the reality is that institutions failed to protect their own investors from managerial misconduct in firms like Enron, Tyco, Global Crossing, and WorldCom, even though they held large positions in these firms. The latest development in the capital markets is the rise of private equity. Private equity funds differ from other types of investment funds mainly in the larger size of their holdings in individual investee companies, their longer investment horizons, and the relatively small number of companies in individual fund portfolios. Private equity managers typically have a greater degree of involvement in their investee companies compared to other investment professionals, such as mutual fund or hedge fund managers, and play a greater role in influencing the corporate governance practices of their investee companies. By virtue of their longer investment horizon, direct participation on the board, and continuous engagement with management, private equity managers play an important role in shaping governance practices. That role is even stronger in a buyout or majority stake acquisition, where a private equity manager exercises substantial control—not just influence, as in minority stake investments—over a company’s governance. Not surprisingly, scholars and regulators are keeping a close watch on the impact of private equity on corporate performance and governance. State and Federal Law Until recently, the U.S. government relied on the states to be the primary legislators for corporations. Corporate law primarily deals with the relationship between the officers, board of directors, and shareholders, and therefore traditionally is considered part of private law. It rests on four key premises that define the modern corporation: (a) indefinite life, (b) legal personhood, (c) limited liability, and (d) freely transferable shares. A corporation is a legal entity consisting of a group of persons—its shareholders—created under the authority of the laws of a state. The entity’s existence is considered separate and distinct from that of its members. Like a real person, a corporation can enter into contracts, sue and be sued, and it must pay taxes separately from its owners. 
As an entity in its own right, it is liable for its own debts and obligations. Provided that it complies with applicable laws, the corporation’s owners (shareholders) typically enjoy limited liability and are legally shielded from the corporation’s liabilities and debts. (This section is based on Kenneth Holland’s May 2005 review of the book Corporate Governance: Law, Theory and Policy.) The existence of a corporation is not dependent upon who the owners or investors are at any one time. Once formed, a corporation continues to exist as a separate entity, even when shareholders die or sell their shares. A corporation continues to exist until the shareholders decide to dissolve it or merge it with another business.

Corporations are subject to the laws of the state of incorporation and to the laws of any other state in which the corporation conducts business. Corporations may therefore be subject to the laws of more than one state. All states have corporation statutes that set forth the ground rules as to how corporations are formed and maintained.

A key question that has helped shape today’s patchwork of corporate laws asks, “What is or should be the role of law in regulating what is essentially a private relationship?” Legal scholars typically adopt either a “contract-based” or a “public interest” approach to this question. Free-market advocates tend to see the corporation as a contract, a voluntary economic relationship between shareholders and management, and see little need for government regulation other than the necessity of providing a judicial forum for civil suits alleging breach of contract. Public interest advocates, on the other hand, concerned by the growing impact of large corporations on society, tend to have little faith in market solutions and argue that government must force firms to behave in a manner that advances the public interest. Proponents of this point of view focus on how corporate behavior affects multiple stakeholders, including customers, employees, creditors, the local community, and protectors of the environment.

The stock market crash of 1929 brought the federal government into the regulation of corporate governance for the first time. President Franklin Roosevelt believed that public confidence in the equity market needed to be restored. Fearing that individual investors would shy away from stocks and, by doing so, reduce the pool of capital available to fuel economic growth in the private sector, Congress enacted the Securities Act in 1933 and the Securities Exchange Act in the following year; the latter established the Securities and Exchange Commission (SEC). This landmark legislation shifted the balance between the roles of federal and state law in governing corporate behavior in America, sparked the growth of federal regulation of corporations at the expense of the states, and, for the first time, exposed corporate officers to federal criminal penalties. More recently, in 2002, as a result of the revelations of accounting and financial misconduct in the Enron and WorldCom scandals, Congress enacted the Accounting Reform and Investor Protection Act, better known as the Sarbanes-Oxley Act.

Most of the major state court decisions involving corporate governance are issued by the Delaware Chancery Court, due to the large number of major corporations incorporated in Delaware. In the 21st century, however, federal securities law has supplanted state law as the most visible means of regulating corporations.
The federalization of corporate governance law is perhaps best illustrated by the provision of the Sarbanes-Oxley law that bans corporate loans to directors and executive officers, a matter long dominated by state law.

The Securities and Exchange Commission

The SEC—created to protect investors; maintain fair, orderly, and efficient markets; and facilitate capital formation—is charged with implementing and enforcing the legal framework that governs security transactions in the United States. This framework is based on a simple and straightforward concept: all investors, whether large institutions or private individuals, should have access to certain basic facts about an investment prior to buying it, and for as long as they hold it. To achieve this, the SEC requires public companies to disclose meaningful financial and other information to the public. This promotes efficiency and transparency in the capital market, which, in turn, stimulates capital formation. To ensure efficiency and transparency, the SEC monitors the key participants in the securities trade, including securities exchanges, securities brokers and dealers, investment advisers, and mutual funds (see http://www.sec.gov/about/whatwedo.shtml).

Crucial to the SEC’s effectiveness in each of these areas is its enforcement authority. Each year the SEC brings hundreds of civil enforcement actions against individuals and companies for violation of the securities laws. Typical infractions include insider trading, accounting fraud, and providing false or misleading information about securities and the companies that issue them. Although it is the primary overseer and regulator of the U.S. securities markets, the SEC works closely with many other institutions, including Congress, other federal departments and agencies, self-regulatory organizations (e.g., the stock exchanges), state securities regulators, and various private sector organizations. Specific responsibilities of the SEC include (a) interpreting federal securities laws; (b) issuing new rules and amending existing ones; (c) overseeing the inspection of securities firms, brokers, investment advisers, and ratings agencies; (d) overseeing private regulatory organizations in the securities, accounting, and auditing fields; and (e) coordinating U.S. securities regulation with federal, state, and foreign authorities.

The Exchanges

The NYSE Euronext and NASDAQ account for the trading of a major portion of equities in North America and the world. While similar in mission, they differ in the ways they operate and in the types of equities traded on them (see http://www.investopedia.com). The NYSE Euronext and its predecessor, the NYSE, trace their origins to 1792. Their listing standards are among the highest of any market in the world. Meeting these requirements signifies that a company has achieved leadership in its industry in terms of business and investor interest and acceptance. The Corporate Governance Listing Standards set out in Section 303A of the NYSE Listed Company Manual were initially approved by the SEC on November 4, 2003, and amended in the following year. Today, NYSE Euronext’s nearly 4,000 listed companies represent almost \$30 trillion in total global market capitalization.

The NASDAQ, the other major U.S. stock exchange, is the largest U.S. electronic stock market. With approximately 3,200 companies, it lists more companies and, on average, trades more shares per day than any other U.S. market.
It is home to companies that are leaders across all areas of business, including technology, retail, communications, financial services, transportation, media, and biotechnology. The NASDAQ is typically known as a high-tech market, attracting many of the firms dealing with the Internet or electronics. Accordingly, the stocks on this exchange are considered to be more volatile and growth oriented. While all trades on the NYSE occur in a physical place, the exchange’s trading floor, the NASDAQ is defined by a telecommunications network. The fundamental difference between the NYSE and NASDAQ, therefore, is in the way securities on the exchanges are transacted between buyers and sellers. The NASDAQ is a dealer market, in which market participants buy and sell from a dealer (the market maker). The NYSE is an auction market, in which individuals typically buy from and sell to one another based on an auction price.

Prior to March 8, 2006, a major difference between these two exchanges was their type of ownership: the NASDAQ exchange was listed as a publicly traded corporation, while the NYSE was private. In March of 2006, however, the NYSE went public after being a not-for-profit exchange for nearly 214 years. In the following year, NYSE Euronext—a holding company—was created as part of the merger of the NYSE Group Inc. and Euronext N.V. Now, NYSE Euronext operates the world’s largest and most liquid exchange group and offers the most diverse array of financial products and services (see the NYSE Web site at http://www.nyse.com). It brings together six cash equities exchanges in five countries and six derivatives exchanges and is a world leader for listings, trading in cash equities, equity and interest rate derivatives, bonds, and the distribution of market data.

As publicly traded companies, the NASDAQ and the NYSE must follow the standard filing requirements set out by the SEC and maintain a body of rules to regulate their member organizations and their associated persons. Such rules are designed to prevent fraudulent and manipulative acts and practices, promote just and equitable principles of trade, and provide a means by which they can take appropriate disciplinary actions against their membership when rule violations occur.

The Gatekeepers: Auditors, Security Analysts, Bankers, and Credit Rating Agencies

The integrity of our financial markets greatly depends on the role played by a number of “gatekeepers”—external auditors, analysts, and credit rating agencies—in detecting and exposing the kinds of questionable financial and accounting decisions that led to the collapse of Enron, WorldCom, and other “misreporting” or accounting frauds. (This section draws on Edwards, 2003.) A key question is whether we can (or should) rely on these gatekeepers to perform their roles diligently. It can be argued that we can and should, because their business success depends on their credibility and reputation with the ultimate users of their information—investors and creditors—and because, if they provide fraudulent or reckless opinions, they are subject to private damage suits. The problem with this view is that the interests of gatekeepers are often more closely aligned with those of corporate managers than with those of investors and shareholders. Gatekeepers, after all, are typically hired and paid (and fired) by the very firms that they evaluate or rate, and not by creditors or investors.
Auditors are hired and paid by the firms they audit; credit rating agencies are typically retained and paid by the firms they rate; lawyers are paid by the firms that retain them; and, as we learned in the aftermath of the 2001 governance scandals, until recently the compensation of security analysts (who work primarily for investment banks) was closely tied to the amount of related investment-banking business that their employers (the investment banks) did with the firms that their analysts evaluate. (Citigroup paid \$400 million to settle government charges that it issued fraudulent research reports; Merrill Lynch agreed to pay \$200 million for issuing fraudulent research in a settlement with securities regulators and also agreed that, in the future, its securities analysts would no longer be paid on the basis of the firm’s related investment-banking work.)

A contrasting view, therefore, holds that most gatekeepers are inherently conflicted and cannot be expected to act in the interests of investors and shareholders. Advocates of this perspective also argue that gatekeeper conflicts of interest worsened during the 1990s because of the increased cross-selling of consulting services by auditors and credit rating agencies and the cross-selling of investment banking services (Coffee, 2002, 2003a, 2003b). Both issues are addressed by recent regulatory reforms; new rules address the restoration of the “Chinese Wall” between investment banks and security analysts, and mandate the separation of audit and consulting services for accounting firms.
Corporate Governance Elsewhere in the World

In Germany, labor unions traditionally have had seats on corporate boards. At Japanese firms, loyal managers often finish their careers with a stint in the boardroom. Founding families hold sway on Indian corporate boards. And in China, boards are populated by Communist Party officials (Bradley, Schipani, Sundaram, and Walsh, 1999). The German and Japanese corporate governance systems are very different from that of the United States. Knowing how they function is important: the German and Japanese economies play host to many of the world’s largest corporations, and their governance systems have had substantial spillover effects beyond their respective borders. Many countries in Europe, such as Austria, Belgium, Hungary, and, to a lesser extent, France and Switzerland, as well as much of northern Europe, evolved their governance systems along Germanic, rather than Anglo-American, lines. Moreover, the newly liberalizing economies of Eastern Europe appear to be patterning their governance systems along Germanic lines as well. The spillover effects of the Japanese governance system are increasingly evident in Asia, where Japanese firms have been the largest direct foreign investors during the past decade. In contrast, variants of the Anglo-American system of governance are found in only a few countries, such as the United Kingdom, Canada, Australia, and New Zealand.

The German Corporate Governance System

The goals of German corporations are clearly defined in German corporation law. Originally enacted in 1937 and subsequently modified in 1965, German corporate law defines the role of the board as governing the corporation for the “good of the enterprise, its multiple stakeholders, and society at large.” Until the 1965 revision, German corporate law said nothing specific about shareholders. The law also provides that if a company endangers public welfare and does not take corrective action, it can be dissolved by an act of state. Despite the relatively recent recognition that shareholders represent an important constituency, corporate law in Germany makes it abundantly clear that shareholders are only one of many stakeholder groups on whose behalf managers must run the firm.

Large public German companies—those with more than 500 employees—are required to have a two-tier board structure: a supervisory board (Aufsichtsrat) that performs the strategic oversight role and a management board (Vorstand) that performs an operational and day-to-day management oversight role. There are no overlaps in membership between the two boards. The supervisory board appoints and oversees the management board. In companies with more than 2,000 employees, half of the supervisory board must consist of employees, the other half of shareholder representatives. The chairperson of the supervisory board is, however, typically a shareholder representative and has the tie-breaking vote. The management board consists almost entirely of the senior executives of the company; thus, management board members have considerable firm- and industry-specific knowledge. The essence of this two-tiered board structure is the explicit representation of stakeholder interests other than those of shareholders: no major strategic decisions can be made without the cooperation of employees and their representatives.

The ownership structure of German firms also differs quite substantially from that observed in Anglo-American firms.
Intercorporate and bank shareholdings are common, and only a relatively small proportion of the equity is owned by private citizens. Ownership typically is more concentrated: almost one quarter of publicly held German firms have a single majority shareholder. Also, a substantial portion of equity is “bearer” rather than “registered” stock. Such equity is typically on deposit with the company’s hausbank, which handles matters such as dividend payments and record keeping. German law allows banks to vote such equity on deposit by proxy, unless depositors explicitly instruct banks to do otherwise. Because of inertia on the part of many investors, banks, in reality, control a substantial portion of the equity in German companies. The ownership structure, the voting restrictions, and the control of the banks also imply that takeovers are less common in Germany than in the United States, as evidenced by the relatively small number of mergers and acquisitions. When corporate combinations do take place, they usually are friendly, arranged deals. Until the recent rise of private equity, hostile takeovers and leveraged buyouts were virtually nonexistent; even today, antitakeover provisions, poison pills, and golden parachutes are rare.

The Japanese Corporate Governance System

The Japanese economy consists of multiple networks of firms with stable, reciprocal, minority equity interests in each other, known as keiretsus. Although the firms in a keiretsu are typically independent companies, they trade with each other and cooperate on matters such as governance. Keiretsus can be vertical or horizontal: vertical keiretsus are networks of firms along the supply chain, while horizontal keiretsus are networks of businesses in similar product markets. Horizontal keiretsus typically include a large main bank that does business with all of the member firms and holds minority equity positions in each.

Like Anglo-American companies, Japanese firms have single-tier boards. However, in Japan a substantial majority of board members are company insiders, usually current or former senior executives. Thus, unlike in the United States, outside directorships are still rare, although they are becoming more prevalent. The one exception concerns the main banks, whose representatives usually sit on the boards of the keiretsu firms with whom they do business. In contrast to the German governance system, where employees and sometimes suppliers tend to have explicit board representation, the interests of stakeholders other than management or the banks are not directly represented on Japanese boards.

Share ownership in Japan is concentrated and stable. Although Japanese banks are not allowed to hold more than 5% of a single firm’s stock, a small group of four or five banks typically controls about 20% to 25% of a firm’s equity. As in Germany, the market for corporate control in Japan is relatively inactive compared to that in the United States. Bradley, Schipani, Sundaram, and Walsh (1999) found that disclosure quality, although considered superior to that of German companies, is poor in comparison to that of U.S. firms. Although there are rules against insider trading and monopolistic practices, the application of these laws is, at best, uneven and inconsistent (Bradley, Schipani, Sundaram, and Walsh, 1999). As Bradley et al. (1999) observe, although there are significant differences, there also is a surprising degree of similarity between the German and Japanese governance systems.
Similarities include the relatively small reliance on external capital markets; the minor role of individual share ownership; significant institutional and intercorporate ownership, which is often concentrated; relatively stable and permanent capital providers; boards comprising functional specialists and insiders with knowledge of the firm and the industry; the relatively important role of banks as financiers, advisers, managers, and monitors of top management; the increased role of leverage, with an emphasis on bank financing; informal as opposed to formal workouts in financial distress; the emphasis on salary and bonuses rather than equity-based executive compensation; the relatively poor disclosure from the standpoint of outside investors; and conservatism in accounting policies. Moreover, both the German and Japanese governance systems emphasize the protection of employee and creditor interests at least as much as the interests of shareholders. The market for corporate control as a credible disciplining device is largely absent in both countries, as is the need for takeover defenses, because the governance system itself, in reality, is a poison pill (Bradley, Schipani, Sundaram, and Walsh, 1999).

As recent history has shown, however, the stakeholder orientation of German and Japanese corporate governance is not without costs. The central role played by employees (in Germany) and suppliers (in Japan) in corporate governance can lead to inflexibility in sourcing strategies, labor markets, and corporate restructurings. It is often harder, therefore, for firms in Germany and Japan to move quickly to meet competitive challenges in the global product-market arena. The employees’ role in governance also affects labor costs, while the suppliers’ role in governance, as in the case of the vertical keiretsu in Japan, can lead to potential problems of implicit or explicit vertical restraints to competition, or what we would refer to as antitrust problems. Finally, the equity ownership structures in both systems make takeovers, arguably an important source of managerial discipline in the Anglo-American system, far more difficult.
A Brief History

Entrepreneurial, Managerial, and Fiduciary Capitalism

In the first part of the twentieth century, large U.S. corporations were controlled by a small number of wealthy entrepreneurs—Morgan, Rockefeller, Carnegie, Ford, and Du Pont, to name a few. These “captains of industry” not only owned the majority of the stock in companies such as Standard Oil and U.S. Steel, but also exercised their rights to run these companies. By the 1930s, however, the ownership of U.S. corporations had become much more widespread. Capitalism in the United States had made a transition from entrepreneurial capitalism, the model in which ownership and control had been synonymous, to managerial capitalism, a model in which ownership and control were effectively separated—that is, in which effective control of the corporation was no longer exercised by the legal owners of equity (the shareholders) but by hired, professional managers.

With the rise of institutional investing in the 1970s, primarily through private and public pension funds, the responsibility of ownership became once again concentrated in the hands of a relatively small number of institutional investors who act as fiduciaries on behalf of individuals. This large-scale institutionalization of equity brought further changes to the corporate governance landscape. Because of their size, institutional investors effectively own a major fraction of many large companies. And because this can restrict their liquidity, they may, de facto, have to rely more on active monitoring (usually by other, smaller activist investors) than on trading. This model of corporate governance, in which monitoring has become as important as, or more important than, trading, is sometimes referred to as fiduciary capitalism. (This section is based on the essay by Hawley and Williams, 2001.)

The 1980s: Takeovers and Restructuring

As the ownership of American companies changed, so did the board-management relationship. For the greater part of the 20th century, when managerial capitalism prevailed, executives had a relatively free rein in interpreting their responsibilities toward the various corporate stakeholders and, as long as the corporation made money and its operations were conducted within the confines of the law, they enjoyed great autonomy. Boards of directors, mostly selected and controlled by management, intervened only infrequently, if at all. Indeed, for the first half of the last century, corporate executives of many publicly held companies managed with little or no outside control.

In the 1970s and 1980s, however, serious problems began to surface, such as exorbitant executive payouts, disappointing corporate earnings, and ill-considered acquisitions that amounted to little more than empire building and depressed shareholder value. Led by a small number of wealthy, activist shareholders seeking to take advantage of the opportunity to capture underutilized assets, takeovers surged in popularity. Terms such as leveraged buyout, dawn raid, poison pill, and junk bond became household words, and individual corporate raiders, including Carl Icahn, Irwin Jacobs, and T. Boone Pickens, became well known. The resulting takeover boom exposed underperforming companies and demonstrated the power of unlocking shareholder value. The initial response of U.S. corporate managers was to fight takeovers with legal maneuvers and to attempt to enlist political and popular support against corporate raiders.
These efforts met with some legislative, regulatory, and judicial success and made hostile takeovers far more costly. As a result, capital became scarce, and junk-bond-financed, highly leveraged hostile takeovers faded from the stage (Thornton, 2002, January 14). Hostile takeovers made a dramatic comeback after the 2001 to 2002 economic recession. In 2001, the value of hostile takeovers climbed to \$94 billion, more than twice the value in 2000 and almost \$15 billion more than in 1988, the previous peak year. Of lasting importance from this era was the emergence of institutional investors who knew the value of ownership rights, had fiduciary responsibilities to use them, and were big enough to make a difference (Romano, 1994). And with the implicit assent of institutional investors, boards substantially increased the use of stock option plans that allowed managers to share in the value created by restructuring their own companies. Shareholder value, therefore, became an ally rather than a threat (Holmstrom and Kaplan, 2003).