Decision Management

From SEBoK

Revision as of 15:32, 28 November 2012

The purpose of decision management is to analyze possible decisions using a formal evaluation process that evaluates identified alternatives against established decision criteria. It involves establishing guidelines to determine which issues should be subjected to a formal evaluation process, and then applying formal evaluation processes to these issues.

The technical discipline of decision analysis identifies the best option among a set of alternatives under uncertainty. Many analysis methods are used in the multidimensional tradeoffs of systems engineering, under varying degrees of uncertainty. These start from non-probabilistic decision rules that ignore the likelihood of chance outcomes, move to expected value rules, and end in more general utility approaches.

Process Overview

The best practices of decision management for evaluating alternatives are described in the following sections and grouped by specific practices (SEI 2007).

  • Establish Guidelines for Decision Analysis - Guidelines should be established to determine which issues are subject to a formal evaluation process, as not every decision is significant enough to warrant formality. Whether a decision is significant or not is dependent on the project and circumstances and is determined by the established guidelines.
  • Establish Evaluation Criteria - Criteria for evaluating alternatives and the relative ranking of these criteria are established. The evaluation criteria provide the basis for evaluating alternative solutions. The criteria are ranked so that the highest ranked criteria exert the most influence. There are many contexts in which a formal evaluation process can be used by systems engineers, so the criteria may have already been defined as part of another process.
  • Identify Alternative Solutions - Identify alternative solutions to address program issues. A wider range of alternatives can surface by soliciting more stakeholders with diverse skills and backgrounds to identify and address assumptions, constraints, and biases. Brainstorming sessions may stimulate innovative alternatives through rapid interaction and feedback. The initial set of candidate solutions may be insufficient for analysis; as the analysis proceeds, other alternatives should be added to the list of potential candidates. The generation and consideration of multiple alternatives early in a decision process increases the likelihood that an acceptable decision will be made and that the consequences of the decision will be understood.
  • Select Evaluation Methods - Methods for evaluating alternative solutions against established criteria can range from simulations to the use of probabilistic models and decision theory. These methods need to be carefully selected. The level of detail of a method should be commensurate with cost, schedule, performance, and associated risk. Typical analysis evaluation methods include the following:
    • modeling and simulation
    • analysis studies on business opportunities, engineering, manufacturing, cost, etc.
    • surveys and user reviews
    • extrapolations based on field experience and prototypes
    • testing
    • judgment provided by an expert or group of experts (e.g., Delphi Method)
  • Evaluate Alternatives - Evaluate alternative solutions using the established criteria and methods. Evaluating alternative solutions includes such activities as analysis, discussion, and review. Iterative cycles of analysis are sometimes necessary. Supporting analyses, experimentation, prototyping, piloting, or simulations may be needed to substantiate scoring and conclusions.
  • Select Solutions - Select solutions from the alternatives based on the evaluation criteria. Selecting solutions involves weighing the results from the evaluation of alternatives and the corresponding risks.

Decision Judgment Methods

Common alternative judgment methods that the practitioner should be aware of include:

  • Emotion-Based Judgment - Once a decision is made public, the decision-makers will vigorously defend their choice, even in the face of contrary evidence, because it is easy to become emotionally tied to the decision. Another phenomenon is that people often need “permission” to support an action or idea, as explained by Cialdini (2006). This inherent human trait also suggests why teams often resist new ideas.
  • Intuition-Based Judgment - Intuition plays a key role in leading development teams to creative solutions. Gladwell (2005) makes the argument that humans intuitively see the powerful benefits or fatal flaws inherent in a proposed solution. Intuition can be an excellent guide when based on relevant past experience, but it may also be blinding to undiscovered concepts. Ideas generated based on intuition should be considered seriously, but should be treated as an output of a brainstorming session and evaluated using one of the next three approaches. Also, see Skunk Works (Rich and Janos 1996) for more on intuitive decisions.
  • Expert-Based Judgment - For certain problems, especially ones involving technical expertise outside SE, utilizing experts is a cost effective approach. The decision-making challenge is to establish perceptive criteria for selecting the right experts.
  • Fact-Based Judgment - This is the most common method and is discussed in more detail below.
  • Probability-Based Judgment - This method is used to deal with uncertainty, and this topic is also elaborated below.

Fact-Based Decision Making

Informed decision-making requires a clear statement of objectives, a clear understanding of the value of the outcome, a gathering of relevant information, an appropriate assessment of alternatives, and a logical process to make a selection. Regardless of the method used, the starting point for decision-making is to identify an appropriate (usually small) team to frame and challenge the decision statement. The decision statement should be concise, but the decision maker and the team should iterate until they have considered all positive and negative consequences of the way they have expressed their objective.

Once the decision-maker and team accept the decision statement, the next step is to define the decision criteria. As shown in Figure 1, the criteria fall into two categories: musts and wants. Any candidate solution that does not satisfy a must should be rejected, no matter how attractive all other aspects of the solution are. If a candidate solution appears promising but fails the must requirements, there is nothing wrong in challenging those requirements, as long as this is done with open awareness to avoid bias. This is a judgment process, and the resulting matrix is a decision support guide, not a mandatory theoretical constraint. A sample flowchart to assist in fact-based judgment from Visualizing Project Management is shown below in Figure 1 (Forsberg, Mooz, and Cotterman 2005, 154-155).

Figure 1. Decision Selection Flowchart (Forsberg, Mooz, and Cotterman 2005). Reprinted with permission of John Wiley & Sons. All other rights are reserved by the copyright owner.

The next step is to define the desirable characteristics of the solution and develop a relative weighting. If no weighting is used, it implies all criteria are of equal importance. One fatal flaw is for the team to create too many criteria (15, 20, or more), as this tends to obscure important differences among the candidate solutions. When the relative comparison is completed, scores within five percent of each other are essentially equal. An opportunity and risk assessment should be performed on the best candidates, as well as a sensitivity analysis on the scores and weights, to ensure that the robustness (or fragility) of the decision is known.

There are a number of approaches, starting with Pugh’s method, in which each candidate solution is compared to a reference standard and rated as equal, better, or inferior for each of the criteria (Pugh 1981). An alternative approach is the Kepner-Tregoe decision matrix, which is described by Forsberg, Mooz, and Cotterman (2005, 154-155).

In the preceding example all criteria are compared to the highest-value criterion. Another approach is to create a full matrix of pair-wise comparisons of all criteria against each other, and from that, using the analytical hierarchy process (AHP), the relative importance is calculated. The process is still judgmental, and the results are no more accurate, but several computerized decision support tools have made effective use of this process (Saaty 2008).
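The pair-wise comparison step of AHP can be sketched in a few lines: the criterion weights are approximated by the principal eigenvector of a reciprocal comparison matrix, computed here by simple power iteration. This is a minimal illustration, not a full AHP implementation (it omits, for example, Saaty's consistency-ratio check), and the 3x3 judgment matrix is a hypothetical example, not taken from the text.

```python
# Sketch of the AHP priority calculation: given a reciprocal pairwise
# comparison matrix of criteria, approximate the principal eigenvector
# by power iteration and normalize it into relative weights.

def ahp_weights(matrix, iterations=100):
    n = len(matrix)
    w = [1.0 / n] * n                     # start from equal weights
    for _ in range(iterations):
        w = [sum(matrix[i][j] * w[j] for j in range(n)) for i in range(n)]
        total = sum(w)
        w = [x / total for x in w]        # normalize so weights sum to 1
    return w

# Hypothetical judgments: criterion 1 is twice as important as criterion 2,
# three times as important as criterion 3, and so on (reciprocals below
# the diagonal).
pairwise = [
    [1.0, 2.0, 3.0],
    [1 / 2, 1.0, 2.0],
    [1 / 3, 1 / 2, 1.0],
]

weights = ahp_weights(pairwise)  # roughly [0.54, 0.30, 0.16]
```

The resulting weights can then feed the same weighted objectives analysis described above.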

Probability-Based Decision Analysis

Probability-based decisions are made when there is uncertainty. Decision management techniques and tools for decisions based on uncertainty include probability theory, utility functions, decision tree analysis, modeling, and simulation. A classic, mathematically oriented reference for understanding decision trees and probability analysis is Raiffa (1997). Another classic introduction, with more of an applied focus, is presented by Schlaiffer (1969). Modeling and simulation are covered in the popular textbook Simulation Modeling and Analysis (Law 2007), which also has good coverage of Monte Carlo analysis. Some of the more commonly used and fundamental methods are overviewed below.

Decision trees and influence diagrams are visual analytical decision support tools where the expected values (or expected utility) of competing alternatives are calculated. A decision tree uses a tree-like graph or model of decisions and their possible consequences, including chance event outcomes, resource costs, and utility. Influence diagrams are used for decision models as alternate, more compact, graphical representations of decision trees.

Figure 2 below demonstrates a simplified make vs. buy decision analysis tree and the associated calculations. Suppose making a product costs $200K more than buying an alternative off the shelf, reflected as a difference in the net payoffs in the figure. The custom development is also expected to be a better product with a corresponding larger probability of high sales at 80% vs. 50% for the bought alternative. With these assumptions, the monetary expected value of the make alternative is (.8*2.0M) + (.2*0.5M) = 1.7M and the buy alternative is (.5*2.2M) + (.5*0.7M) = 1.45M.
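The expected-value calculation in this example reduces to probability-weighted sums, which can be sketched directly from the Figure 2 numbers:

```python
# Expected monetary value (EMV) for the make-vs-buy decision tree in
# Figure 2. Each alternative is a list of (probability, net payoff in $M)
# chance outcomes; its EMV is the probability-weighted sum of payoffs.

def expected_value(outcomes):
    """Sum of probability-weighted payoffs for one alternative."""
    return sum(p * payoff for p, payoff in outcomes)

make = [(0.8, 2.0), (0.2, 0.5)]   # 80% chance of high sales
buy  = [(0.5, 2.2), (0.5, 0.7)]   # 50% chance of high sales

emv_make = expected_value(make)   # (.8*2.0) + (.2*0.5) = 1.7
emv_buy  = expected_value(buy)    # (.5*2.2) + (.5*0.7) = 1.45
best = "make" if emv_make > emv_buy else "buy"
```

Under these assumptions the make alternative has the higher expected monetary value, matching the calculation in the text.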

Figure 2. Decision Tree Example. (SEBoK Original)

Influence diagrams focus attention on the issues and relationships between events. They are generalizations of Bayesian networks in which maximum expected utility criteria can be modeled. Detwarasiti and Shachter (2005, 207-228) present a good reference for using influence diagrams in team decision analysis. Utility is a measure of relative satisfaction that takes into account the decision-maker’s preference function, which may be nonlinear. Expected utility theory deals with the analysis of choices with multidimensional outcomes. When dealing with monetary value, the analyst can determine the decision-maker’s utility for money and select the alternative course of action that yields the highest expected utility rather than the highest expected monetary value. Decision with Multiple Objectives: Preferences and Value Trade-Offs is a classic reference on applying multiple objective methods, utility functions, and allied techniques (Kenney and Raiffa 1976). References with applied examples of decision tree analysis and utility functions include Managerial Decision Analysis (Samson 1988) and Introduction to Decision Analysis (Skinner 1999).
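To illustrate the difference between expected monetary value and expected utility, the Figure 2 make-vs-buy tree can be re-evaluated under a concave (risk-averse) utility function. The square-root utility here is an illustrative assumption, not a prescribed function; a sufficiently risk-averse utility would favor the buy alternative, whose worst-case payoff ($0.7M) exceeds that of the make alternative ($0.5M).

```python
# Expected utility for the Figure 2 decision tree under an assumed
# risk-averse utility function u(x) = sqrt(x): each additional $M of
# payoff adds less satisfaction than the one before.
import math

def expected_utility(outcomes, utility):
    """Probability-weighted utility of (probability, payoff) outcomes."""
    return sum(p * utility(payoff) for p, payoff in outcomes)

u = math.sqrt  # illustrative risk-averse utility, not from the text

make = [(0.8, 2.0), (0.2, 0.5)]
buy  = [(0.5, 2.2), (0.5, 0.7)]

eu_make = expected_utility(make, u)  # about 1.27
eu_buy  = expected_utility(buy, u)   # about 1.16
```

With this mild degree of risk aversion the ranking happens to be unchanged, but the point of the method is that the utility function, not the raw dollars, drives the selection.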

Systems engineers often have to consider many different criteria when making choices about system tradeoffs. Taken individually, these criteria could lead to very different choices. A weighted objectives analysis is a rational basis for incorporating the multiple criteria into the decision. In this analysis, each criterion is weighted depending on its importance relative to the others. Table 1 below shows an example deciding between two alternatives using their criteria weights, ratings, weighted ratings, and weighted totals to decide between the alternatives.

Table 1. Weighted Criteria Example. (SEBoK Original)
Criteria                Weight    Alternative A                  Alternative B
                                  Rating   Weight * Rating       Rating   Weight * Rating
Better                  0.5       4        2.0                   10       5.0
Faster                  0.3       8        2.4                   5        1.5
Cheaper                 0.2       5        1.0                   3        0.6
Total Weighted Score                       5.4                            7.1
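The weighted objectives analysis in Table 1 is a straightforward sum of weight-times-rating products per alternative, and can be sketched as:

```python
# Weighted objectives analysis from Table 1: each alternative's score is
# the sum over criteria of (criterion weight x rating).

weights = {"Better": 0.5, "Faster": 0.3, "Cheaper": 0.2}
ratings = {
    "A": {"Better": 4, "Faster": 8, "Cheaper": 5},
    "B": {"Better": 10, "Faster": 5, "Cheaper": 3},
}

def weighted_total(alternative):
    """Total weighted score for one alternative."""
    return sum(weights[c] * ratings[alternative][c] for c in weights)

score_a = weighted_total("A")  # 2.0 + 2.4 + 1.0 = 5.4
score_b = weighted_total("B")  # 5.0 + 1.5 + 0.6 = 7.1
```

Alternative B's higher total weighted score (7.1 vs. 5.4) makes it the preferred choice under these weights and ratings.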

There are numerous other methods used in decision analysis. One of the simplest is sensitivity analysis, which looks at the relationships between the outcomes and their probabilities to find how sensitive a decision point is to changes in input values. Value of information methods concentrate effort on data analysis and modeling to improve the optimum expected value. Multi-Attribute Utility Analysis (MAUA) is a method that develops equivalencies between dissimilar units of measure. Systems Engineering Management (Blanchard 2004) shows a variety of these decision analysis methods in many technical decision scenarios. A comprehensive reference demonstrating decision analysis methods for software-intensive systems is "Software Risk Management: Principles and Practices" (Boehm 1981, 32-41). It is a major source of information pertaining to multiple goal decision analysis, dealing with uncertainties, risks, and the value of information.
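A simple sensitivity analysis can be sketched on the Figure 2 decision tree: sweep the probability of high sales for the make alternative and find the break-even point at which the preferred choice flips (the buy EMV stays fixed at $1.45M). The 1% step size is an arbitrary choice for illustration.

```python
# Sensitivity analysis on the make-vs-buy tree: how sensitive is the
# decision to the assumed 80% probability of high sales for "make"?

def emv_make(p_high):
    """EMV of the make alternative as a function of P(high sales)."""
    return p_high * 2.0 + (1 - p_high) * 0.5

EMV_BUY = 0.5 * 2.2 + 0.5 * 0.7  # fixed at 1.45

# Scan in 1% steps for the first probability at which "make" matches
# or beats "buy".
breakeven = next(p / 100 for p in range(101)
                 if emv_make(p / 100) >= EMV_BUY)
# breakeven is 0.64: below roughly 63-64% probability of high sales,
# the buy alternative becomes preferred.
```

The assumed 80% probability therefore has about 16 percentage points of margin before the decision flips, which is the kind of robustness insight sensitivity analysis is meant to expose.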

Facets of a decision situation which cannot be explained by a quantitative model should be reserved for the intuition and judgment that are applied by the decision maker. Sometimes outside parties are also called upon. One method to canvas experts, known as the Delphi Technique, is a method of group decision-making and forecasting that involves successively collecting and analyzing the judgments of experts. A variant called the Wideband Delphi Technique is described by Boehm (1981, 32-41) for improving upon the standard Delphi with more rigorous iterations of statistical analysis and feedback forms.
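The statistical feedback step of a Delphi round can be sketched in a few lines: experts submit estimates anonymously, and the facilitator returns summary statistics (rather than attributed values) so outliers can reconsider in the next round. The estimate values below are invented for illustration.

```python
# Illustrative sketch of the anonymous feedback step in a (Wideband)
# Delphi round: summarize the expert estimates without attribution.
import statistics

round_1 = [10, 12, 14, 30, 11]       # anonymous expert estimates (hypothetical)

median = statistics.median(round_1)  # central tendency fed back to the group
spread = max(round_1) - min(round_1) # range, flagging disagreement to resolve
```

Successive rounds repeat this collect-summarize-revise cycle until the spread narrows to an acceptable consensus.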

Making decisions is a key process practiced by systems engineers, project managers, and all team members. Sound decisions are based on good judgment and experience. There are concepts, methods, processes, and tools that can assist in the process of decision-making, especially in making comparisons of decision alternatives. These tools can also assist in building team consensus, in selecting and supporting the decision made, and in defending it to others. General tools, such as spreadsheets and simulation packages, can be used with these methods. There are also tools targeted specifically for aspects of decision analysis such as decision trees, evaluation of probabilities, Bayesian influence networks, and others. The International Council on Systems Engineering (INCOSE) tools database (INCOSE 2010, 1) has an extensive list of analysis tools.

Linkages to Other Systems Engineering Management Topics

Decision management is used in many other process areas, because there are numerous contexts, both technical and managerial, for a formal evaluation process, and it is closely coupled with other management areas. Risk management in particular uses decision analysis methods for risk evaluation, mitigation decisions, and a formal evaluation process to address medium or high risks. The measurement process describes how to derive quantitative indicators as input to decisions. Project assessment and control uses decision results. Refer to the planning process area for more information about incorporating decision results into project plans.

Practical Considerations

Key pitfalls and good practices related to decision analysis are described below.

Pitfalls

Some of the key pitfalls are below in Table 2.

Table 2. Decision Analysis Pitfalls. (SEBoK Original)
Name Description
False Confidence
  • False confidence in the accuracy of values used in decisions.
No External Validation
  • Not engaging experts and holding peer reviews. The decision-maker should engage experts to validate decision values.
Errors and False Assumptions
  • Prime sources of errors in risky decision-making include false assumptions, not having an accurate estimation of the probabilities, relying on expectations, difficulties in measuring the utility function, and forecast errors.
Impractical Application
  • The analytical hierarchy process may not handle real-life situations, given the theoretical difficulties in using eigenvectors.

Good Practices

Some good practices are below in Table 3.

Table 3. Decision Analysis Good Practices. (SEBoK Original)
Name Description
Progressive Decision Modeling
  • Use progressive model building. Detail and sophistication can be added as confidence in the model increases.
Necessary Measurements
  • Measurements need to be tied to the information needs of the decision-makers.
Define Selection Criteria
  • Define selection criteria and process (and success criteria) before identifying trade alternatives.

Additional good practices can be found in ISO/IEC/IEEE (2009, Clause 6.3) and INCOSE (2010, Section 5.3.1.5). Parnell, Driscoll, and Henderson (2010) provide a thorough overview.

References

Works Cited

Blanchard, B.S. 2004. Systems Engineering Management. 3rd ed. New York, NY, USA: John Wiley & Sons.

Boehm, B. 1981. "Software risk management: Principles and practices." IEEE Software 8 (1) (January 1991): 32-41.

Cialdini, R.B. 2006. Influence: The Psychology of Persuasion. New York, NY, USA: Collins Business Essentials.

Detwarasiti, A., and R. D. Shachter. 2005. "Influence diagrams for team decision analysis." Decision Analysis 2 (4): 207-28.

Gladwell, M. 2005. Blink: the Power of Thinking without Thinking. Boston, MA, USA: Little, Brown & Co.

Kenney, R.L., and H. Raiffa. 1976. Decision with multiple objectives: Preferences and value- trade-offs. New York, NY: John Wiley & Sons.

INCOSE. 2011. Systems Engineering Handbook: A Guide for System Life Cycle Processes and Activities. Version 3.2.1. San Diego, CA, USA: International Council on Systems Engineering (INCOSE), INCOSE-TP-2003-002-03.2.1.

Law, A. 2007. Simulation Modeling and Analysis. 4th ed. New York, NY, USA: McGraw Hill.

Pugh, S. 1981. "Concept selection: a method that works". In Hubka, V. (ed.), Review of design methodology. Proceedings of the International Conference on Engineering Design, Rome, Italy, March 1981.

Rich, B. and L. Janos. 1996. Skunk Works. Boston, MA, USA: Little, Brown & Company.

Saaty, T.L. 2008. Decision Making for Leaders: The Analytic Hierarchy Process for Decisions in a Complex World. Pittsburgh, PA, USA: RWS Publications. ISBN 0-9620317-8-X.

Raiffa, H. 1997. Decision Analysis: Introductory Lectures on Choices under Uncertainty. New York, NY, USA: McGraw-Hill.

Samson, D. 1988. Managerial Decision Analysis. New York, NY, USA: Richard D. Irwin, Inc.

Schlaiffer, R. 1969. Analysis of Decisions under Uncertainty. New York, NY, USA: McGraw-Hill Book Company.

Skinner, D. 1999. Introduction to Decision Analysis. 2nd ed. Sugar Land, TX, USA: Probabilistic Publishing.

Wikipedia contributors. "Decision making software." Wikipedia, The Free Encyclopedia. Accessed September 13, 2011. Available at: http://en.wikipedia.org/w/index.php?title=Decision_making_software&oldid=448914757.

Primary References

Forsberg, K., H. Mooz, and H. Cotterman. 2005. Visualizing Project Management. 3rd ed. Hoboken, NJ, USA: John Wiley and Sons. p. 154-155.

Law, A. 2007. Simulation Modeling and Analysis. 4th ed. New York, NY, USA: McGraw Hill.

Raiffa, H. 1997. Decision Analysis: Introductory Lectures on Choices under Uncertainty. New York, NY, USA: McGraw-Hill.

Saaty, T.L. 2008. Decision Making for Leaders: The Analytic Hierarchy Process for Decisions in a Complex World. Pittsburgh, PA, USA: RWS Publications. ISBN 0-9620317-8-X.

Samson, D. 1988. Managerial Decision Analysis. New York, NY, USA: Richard D. Irwin, Inc.

Schlaiffer, R. 1969. Analysis of Decisions under Uncertainty. New York, NY, USA: McGraw-Hill.

Additional References

Blanchard, B. S. 2004. Systems Engineering Management. 3rd ed. New York, NY, USA: John Wiley & Sons.

Boehm, B. 1981. "Software Risk Management: Principles and Practices." IEEE Software 8(1) (January 1991): 32-41.

Detwarasiti, A. and R.D. Shachter. 2005. "Influence Diagrams for Team Decision Analysis." Decision Analysis 2(4): 207-28.

INCOSE. 2011. Systems Engineering Handbook: A Guide for System Life Cycle Processes and Activities. Version 3.2.1. San Diego, CA, USA: International Council on Systems Engineering (INCOSE), INCOSE-TP-2003-002-03.2.1.

Kenney, R.L., and H. Raiffa. 1976. Decision with Multiple Objectives: Preferences and Value- Trade-Offs. New York, NY, USA: John Wiley & Sons.

Kepner, Charles Higgins, and Benjamin B. Tregoe. 1965. The rational manager: a systematic approach to problem solving and decision making. New York: McGraw-Hill.

Parnell, G.S., P.J. Driscoll, and D.L. Henderson. 2010. Decision Making in Systems Engineering and Management. New York, NY, USA: John Wiley & Sons.

Rich, B. and L. Janos. 1996. Skunk Works. Boston, MA, USA: Little, Brown & Company.

Skinner, D. 1999. Introduction to Decision Analysis. 2nd ed. Sugar Land, TX, USA: Probabilistic Publishing.


SEBoK v. 1.9.1 released 30 September 2018