Measurement

[[Measurement (glossary)|Measurement]] and the accompanying analysis are fundamental elements of [[Systems Engineering (glossary)|systems engineering]] (SE) and technical management. SE measurement provides information relating to the products developed, services provided, and processes implemented to support effective management of the processes and to objectively evaluate product or service quality. Measurement supports realistic planning, provides insight into actual performance, and facilitates assessment of suitable actions (Roedler and Jones 2005, 1-65; Frenz et al. 2010).  
  
Appropriate measures and indicators are essential inputs to tradeoff analyses to balance cost, schedule, and technical objectives. Periodic analysis of the relationships between measurement results and review of the requirements and attributes of the system provides insights that help to identify issues early, when they can be resolved with less impact. Historical data, together with project or organizational context information, forms the basis for the predictive models and methods that should be used.
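To illustrate the last point, the following is a minimal Python sketch of how archived historical data might feed a simple predictive model. The project data, the choice of requirement count as the driver, and the fitted relationship are hypothetical illustrations only, not part of the SEBoK guidance; real organizations would use richer models and context information.

<syntaxhighlight lang="python">
# Minimal sketch: use archived project data to estimate SE effort for a new
# project with an ordinary least-squares fit. All values are illustrative.

historical = [  # (requirements, SE effort in person-months) -- hypothetical
    (120, 14.0), (200, 22.5), (310, 33.0), (450, 47.5), (600, 60.0),
]

def fit_line(points):
    """Ordinary least-squares fit of y = a + b*x."""
    n = len(points)
    mean_x = sum(x for x, _ in points) / n
    mean_y = sum(y for _, y in points) / n
    b = sum((x - mean_x) * (y - mean_y) for x, y in points) / \
        sum((x - mean_x) ** 2 for x, _ in points)
    a = mean_y - b * mean_x
    return a, b

a, b = fit_line(historical)
new_requirements = 380          # hypothetical new project
estimate = a + b * new_requirements
print(f"Estimated SE effort: {estimate:.1f} person-months")
</syntaxhighlight>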
  
==Fundamental Concepts==
The discussion of measurement in this article is based on some fundamental concepts. Roedler and Jones (2005, 1-65) state three key SE measurement concepts, which are paraphrased here:
# '''SE measurement is a consistent but flexible process''' that is tailored to the unique information needs and characteristics of a particular project or organization and revised as information needs change. 
# '''Decision makers must understand what is being measured.''' Key decision-makers must be able to connect ''what is being measured'' to ''what they need to know'' and ''what decisions they need to make'' as part of a closed-loop, feedback control process (Frenz et al. 2010).
# '''Measurement must be used to be effective.'''
  
==Measurement Process Overview==
The measurement process as presented here consists of four activities from Practical Software and Systems Measurement (PSM 2011), as described in ISO/IEC/IEEE 15939 and McGarry et al. (2002):
# establish and sustain commitment
# plan measurement
# perform measurement
# evaluate measurement
  
This approach has been the basis for establishing a common measurement process across the software and systems engineering communities. It has been adopted by the Capability Maturity Model Integration (CMMI) measurement and analysis process area (SEI 2010), as well as by international systems and software engineering standards (ISO/IEC/IEEE 15939; ISO/IEC/IEEE 15288). The International Council on Systems Engineering (INCOSE) Measurement Working Group has also adopted this approach for several of its measurement assets, such as the [[Systems Engineering Measurement Primer|INCOSE SE Measurement Primer]] (Frenz et al. 2010) and the [[Technical Measurement Guide]] (Roedler and Jones 2005). The result is a consistent treatment of measurement that allows the engineering community to communicate more effectively about measurement. The process is illustrated in Figure 1, from Roedler and Jones (2005) and McGarry et al. (2002).
  
[[File:Measurement_Process_Model-Figure_1.png|thumb|600px|center|'''Figure 1. Four Key Measurement Process Activities (PSM 2011).''' Reprinted with permission of Practical Software and Systems Measurement ([http://www.psmsc.com PSM]). All other rights are reserved by the copyright owner.]]
  
===Establish and Sustain Commitment===
This activity focuses on establishing the resources, training, and tools needed to implement a measurement process and on ensuring that there is a management commitment to use the information that is produced. Refer to PSM (2011) and SPC (2011) for additional detail.
  
===Plan Measurement===
This activity focuses on defining measures that provide insight into project or organization [[Information Need (glossary)|information needs]]. This includes identifying what the decision-makers need to know and when they need to know it, relating these information needs to measurable entities, and identifying, prioritizing, selecting, and specifying [[Measure (glossary)|measures]] based on project and organization processes (Jones 2003, 15-19). This activity also identifies the reporting format, forums, and target audience for the information provided by the measures.
  
The following are a few widely used approaches for identifying information needs and deriving the associated measures; each can be focused on identifying the measures needed for SE management:
  
* The PSM approach, which uses a set of [[Information Category (glossary)|information categories]], [[Measurable Concept (glossary)|measurable concepts]], and candidate measures to aid the user in determining relevant information needs and the characteristics of those needs on which to focus (PSM 2011).
* The Goal-Question-Metric (GQM) approach, which identifies explicit measurement goals. Each goal is decomposed into several questions that help in the selection of measures that address the question and provide insight into goal achievement (Park, Goethert, and Florac 1996); a brief sketch of this decomposition is shown after this list.
* Software Productivity Center’s (SPC's) 8-step Metrics Program, which also includes stating the goals and defining measures needed to gain insight for achieving the goals (SPC 2011).   
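As a brief illustration of the GQM decomposition referenced above, the following Python sketch links a hypothetical measurement goal to questions and candidate measures. The goal, questions, and measure names are invented for illustration and are not drawn from the cited sources.

<syntaxhighlight lang="python">
# Minimal sketch of a Goal-Question-Metric decomposition: each measurement
# goal is refined into questions, and each question is linked to candidate
# measures that help answer it. All names below are hypothetical.

gqm = {
    "Goal: Deliver the system within the approved schedule": {
        "How much of the planned technical work is complete?": [
            "requirements approved vs. planned",
            "interfaces defined vs. planned",
        ],
        "Is the remaining work achievable at the current pace?": [
            "work package closure rate",
            "schedule variance",
        ],
    },
}

for goal, questions in gqm.items():
    print(goal)
    for question, measures in questions.items():
        print(f"  {question}")
        for measure in measures:
            print(f"    candidate measure: {measure}")
</syntaxhighlight>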
  
The following are good sources for candidate measures that address information needs and measurable concepts/questions:
* PSM Web Site (PSM 2011)
* PSM Guide, Version 4.0, Chapters 3 and 5 (PSM 2000)
* SE Leading Indicators Guide, Version 2.0, Section 3  (Roedler et al. 2010)
* Technical Measurement Guide, Version 1.0, Section 10 (Roedler and Jones 2005, 1-65)
* Safety Measurement (PSM White Paper), Version 3.0, Section 3.4 (Murdoch 2006, 60)
* Security Measurement (PSM White Paper), Version 3.0, Section 7 (Murdoch 2006, 67)
* Measuring Systems Interoperability, Section 5 and Appendix C (Kasunic and Anderson 2004)
* Measurement for Process Improvement (PSM Technical Report), version 1.0, Appendix E (Statz 2005)
The INCOSE ''SE Measurement Primer'' (Frenz et al. 2010) provides a list of attributes of a good measure with definitions for each [[Attribute (glossary)|attribute]]; these attributes include ''relevance, completeness, timeliness, simplicity, cost effectiveness, repeatability, and accuracy.''  Evaluating candidate measures against these attributes can help assure the selection of more effective measures.  
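A simple way to apply these attributes is to score each candidate measure against them and compare the results. The following Python sketch assumes hypothetical candidate measures and 1-5 scores; only the attribute names come from the Primer, and a real evaluation would be made by the measurement stakeholders.

<syntaxhighlight lang="python">
# Minimal sketch: screen hypothetical candidate measures against the Primer's
# attributes of a good measure using illustrative 1-5 scores.

ATTRIBUTES = ["relevance", "completeness", "timeliness", "simplicity",
              "cost effectiveness", "repeatability", "accuracy"]

candidates = {
    "requirements volatility": [5, 4, 4, 4, 4, 5, 4],
    "defect density":          [4, 3, 3, 5, 4, 5, 4],
    "staff overtime hours":    [2, 3, 5, 5, 5, 4, 3],
}

for name, scores in sorted(candidates.items(),
                           key=lambda kv: sum(kv[1]), reverse=True):
    profile = ", ".join(f"{a}={s}" for a, s in zip(ATTRIBUTES, scores))
    print(f"{name}: total={sum(scores)} ({profile})")
</syntaxhighlight>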
  
The details of each measure need to be unambiguously defined and documented. Templates for the specification of measures and indicators are available on the PSM website (2011) and in Goethert and Siviy (2004).
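As an illustration of an unambiguous measure definition, the following Python sketch captures a specification as a structured record. The field names are representative of those found in published templates and the example values are hypothetical, so this should not be read as a reproduction of the PSM or SEI templates.

<syntaxhighlight lang="python">
# Minimal sketch of a measure specification record; fields and values are
# illustrative, not an official template.

from dataclasses import dataclass
from typing import List

@dataclass
class MeasureSpecification:
    name: str
    information_need: str          # what the decision maker needs to know
    measurable_concept: str        # e.g., schedule progress, product quality
    base_measures: List[str]       # raw data items to collect
    derivation: str                # how base measures combine into the indicator
    collection_method: str         # source, tool, frequency, responsible role
    reporting: str                 # format, forum, and audience for the result

spec = MeasureSpecification(
    name="Requirements validation progress",
    information_need="Will requirements validation finish before the design review?",
    measurable_concept="Work unit progress",
    base_measures=["requirements validated", "requirements planned for validation"],
    derivation="validated / planned, reported as a percentage per month",
    collection_method="Exported monthly from the requirements tool by the SE lead",
    reporting="Trend chart in the monthly technical review package",
)
print(spec.name, "->", spec.derivation)
</syntaxhighlight>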
===Perform Measurement===
This activity focuses on the collection and preparation of measurement data, measurement analysis, and the presentation of the results to inform decision makers. The preparation of the measurement data includes verification, normalization, and aggregation of the data, as applicable. Analysis includes estimation, feasibility analysis of plans, and performance analysis of actual data against plans.  
  
The quality of the measurement results is dependent on the collection and preparation of valid, accurate, and unbiased data. Data verification, validation, preparation, and analysis techniques are discussed in PSM (2011) and SEI (2010). Per TL 9000, ''Quality Management System Guidance'', ''The analysis step should integrate quantitative measurement results and other qualitative project information, in order to provide managers the feedback needed for effective decision making'' (QuEST Forum 2012, 5-10). This provides richer information that gives the users the broader picture and puts the information in the appropriate context.  
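The following minimal Python sketch illustrates these steps with made-up monthly data: it verifies the reported values, then compares actual progress against the plan to produce a simple variance that could be presented to decision makers. The measure, periods, and numbers are hypothetical.

<syntaxhighlight lang="python">
# Minimal sketch of "perform measurement": verify raw data, then compare
# actuals against the plan. All data values are hypothetical.

plan =   {"2024-01": 40, "2024-02": 85, "2024-03": 130}   # planned requirements validated (cumulative)
actual = {"2024-01": 38, "2024-02": 74, "2024-03": None}  # None = not yet reported

def verified(series):
    """Keep only periods with reported, non-negative values."""
    return {period: value for period, value in series.items()
            if value is not None and value >= 0}

def variance_report(plan, actual):
    for period, planned in plan.items():
        if period not in actual:
            continue
        delta = actual[period] - planned
        status = "behind plan" if delta < 0 else "on/ahead of plan"
        yield period, planned, actual[period], delta, status

for period, planned, done, delta, status in variance_report(plan, verified(actual)):
    print(f"{period}: planned={planned}, actual={done}, variance={delta:+d} ({status})")
</syntaxhighlight>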
There is a significant body of guidance available on good ways to present quantitative information. Edward Tufte has several books focused on the visualization of information, including ''The Visual Display of Quantitative Information'' (Tufte 2001).  
  
Other resources that contain further information pertaining to understanding and using measurement results include:
* PSM (2011)
* ISO/IEC/IEEE 15939, clauses 4.3.3 and 4.3.4
* Roedler and Jones (2005), sections 6.4, 7.2, and 7.3
  
===Evaluate Measurement===
This activity involves the periodic evaluation and improvement of the measurement process and of the specific measures in use. One objective is to ensure that the measures continue to align with the business goals and information needs, as well as provide useful insight. This activity should also evaluate the SE measurement activities, resources, and infrastructure to make sure they support the needs of the project and organization. Refer to PSM (2011) and ''Practical Software Measurement: Objective Information for Decision Makers'' (McGarry et al. 2002) for additional detail.
  
==Systems Engineering Leading Indicators==
Leading indicators are aimed at providing predictive insight that pertains to an information need. An SE leading indicator is ''a measure for evaluating the effectiveness of how a specific activity is applied on a project in a manner that provides information about impacts that are likely to affect the system performance objectives'' (Roedler et al. 2010). Leading indicators may be individual measures or collections of measures and associated analysis that provide future systems engineering performance insight throughout the life cycle of the system; they ''support the effective management of systems engineering by providing visibility into expected project performance and potential future states'' (Roedler et al. 2010).
  
As shown in Figure 2, a leading indicator is composed of characteristics, a condition, and a predicted behavior.  The characteristics and conditions are analyzed on a periodic or as-needed basis to predict behavior within a given confidence level and within an accepted time range into the future. More information is also provided by Roedler et al. (2010).
  
[[File:Composition_of_Leading_Indicator-Figure_2.png|thumb|500px|center|'''Figure 2. Composition of a Leading Indicator (Roedler et al. 2010).''' Reprinted with permission of the International Council on Systems Engineering ([http://www.incose.com INCOSE]) and Practical Software and Systems Measurement ([http://www.psmsc.com PSM]). All other rights are reserved by the copyright owner.]]
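The following Python sketch illustrates the basic idea behind a leading indicator: fit a trend to a measured characteristic and project it forward to predict whether an objective will be met in time. The measure, data, target, and deadline are hypothetical, and a full analysis as described in Roedler et al. (2010) would also attach confidence bounds to the projection.

<syntaxhighlight lang="python">
# Minimal sketch of a leading indicator: extrapolate a least-squares trend of
# a hypothetical "review action closure" measure to a future deadline.

months = [1, 2, 3, 4, 5, 6]
closure_rate = [55, 57, 59, 62, 64, 66]   # % of review actions closed, by month (illustrative)
target, deadline_month = 95, 12

n = len(months)
mx, my = sum(months) / n, sum(closure_rate) / n
slope = sum((x - mx) * (y - my) for x, y in zip(months, closure_rate)) / \
        sum((x - mx) ** 2 for x in months)
intercept = my - slope * mx

projected_at_deadline = intercept + slope * deadline_month
print(f"Projected closure at month {deadline_month}: {projected_at_deadline:.1f}%")
if projected_at_deadline < target:
    print("The indicator flags a risk of missing the target; early action is warranted.")
else:
    print("The indicator suggests the target is on track.")
</syntaxhighlight>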
  
==Technical Measurement==
Technical measurement is the set of measurement activities used to provide information about progress in the definition and development of the technical solution, ongoing assessment of the associated risks and issues, and the likelihood of meeting the critical objectives of the [[Acquirer (glossary)|acquirer]]. This insight helps an engineer make better decisions throughout the life cycle of a system and increase the probability of delivering a technical solution that meets both the specified requirements and the mission needs. The insight is also used in trade-off decisions when performance is not within the thresholds or goals.
  
Technical measurement includes [[Measure of Effectiveness (MoE) (glossary)|measures of effectiveness]] (MOEs), [[Measure of Performance (MoP) (glossary)|measures of performance]] (MOPs), and [[Technical Performance Measure (TPM) (glossary)|technical performance measures]] (TPMs) (Roedler and Jones 2005, 1-65). The relationships between these types of technical measures are shown in Figure 3 and explained in the reference for Figure 3. Using the measurement process described above, technical measurement can be planned early in the life cycle and then performed throughout the life cycle with increasing levels of fidelity as the technical solution is developed, facilitating predictive insight and preventive or corrective actions. More information about technical measurement can be found in the ''[[NASA Systems Engineering Handbook]]'', ''System Analysis, Design, Development: Concepts, Principles, and Practices'', and the ''[[Systems Engineering Leading Indicators Guide]]'' (NASA December 2007, 1-360, Section 6.7.2.2; Wasson 2006, Chapter 34; Roedler and Jones 2005).
[[File:Technical_Measures_Relationship-Figure_3.png|thumb|600px|center|'''Figure 3. Relationship of the Technical Measures (Roedler et al. 2010).''' Reprinted with permission of the International Council on Systems Engineering ([http://www.incose.org INCOSE]) and Practical Software and Systems Measurement ([http://www.psmsc.com PSM]). All other rights are reserved by the copyright owner.]]
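As a simple illustration of tracking a TPM over the life cycle, the following Python sketch compares successive estimates of a hypothetical vehicle mass with a design goal and a not-to-exceed threshold and reports the remaining margin. The milestones and values are invented for illustration.

<syntaxhighlight lang="python">
# Minimal sketch of TPM tracking: compare current estimates of a hypothetical
# mass TPM against its goal and threshold and report the margin.

threshold_kg = 1500.0   # not-to-exceed value (hypothetical)
goal_kg = 1400.0        # design goal (hypothetical)

estimates = [           # (milestone, current best estimate of mass in kg)
    ("SRR", 1290.0), ("PDR", 1360.0), ("CDR", 1430.0),
]

for milestone, estimate in estimates:
    margin = threshold_kg - estimate
    status = "within goal" if estimate <= goal_kg else (
        "between goal and threshold" if estimate <= threshold_kg else "exceeds threshold")
    print(f"{milestone}: estimate={estimate:.0f} kg, margin to threshold={margin:.0f} kg ({status})")
</syntaxhighlight>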
  
==Service Measurement==
The same measurement activities can be applied for service measurement; however, the context and measures will be different. Service providers have a need to balance efficiency and effectiveness, which may be opposing objectives. Good service measures are outcome-based, focus on elements important to the customer (e.g., service availability, reliability, performance, etc.), and provide timely, forward-looking information.  
  
For services, the terms critical success factors (CSFs) and key performance indicators (KPIs) are often used when discussing measurement. CSFs are the key elements of the service or service infrastructure that are most important for achieving the business objectives. KPIs are specific values or characteristics measured to assess achievement of those objectives.
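The following Python sketch illustrates an outcome-based service KPI of this kind: monthly availability computed from downtime records and compared with a target that supports a CSF such as service dependability. The target and downtime figures are hypothetical.

<syntaxhighlight lang="python">
# Minimal sketch of a service availability KPI; all values are hypothetical.

target_availability = 99.5            # percent (hypothetical KPI target)
minutes_in_month = 30 * 24 * 60
downtime_minutes = {"January": 95, "February": 40, "March": 260}

for month, downtime in downtime_minutes.items():
    availability = 100.0 * (minutes_in_month - downtime) / minutes_in_month
    flag = "meets" if availability >= target_availability else "misses"
    print(f"{month}: availability={availability:.2f}% ({flag} the {target_availability}% target)")
</syntaxhighlight>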
More information about service measurement can be found in the ''Service Design'' and ''Continual Service Improvement'' volumes of BMP (2010, 1). More information on service SE can be found in the [[Service Systems Engineering]] article.
==Linkages to Other Systems Engineering Management Topics==
SE measurement has linkages to other systems engineering management topics. The following are a few key linkages adapted from Roedler and Jones (2005):
* [[Planning]] – SE measurement provides the historical data and supports the estimation and feasibility analysis needed for realistic planning.
* [[Assessment and Control]] – SE measurement provides the objective information needed to perform the assessment and determination of appropriate control actions. The use of leading indicators allows for early assessment and control actions that identify risks and/or provide insight to allow early treatment of risks to minimize potential impacts.
* [[Risk Management]] – SE risk management identifies the information needs that can impact project and organizational performance. SE measurement data helps to quantify risks and subsequently provides information about whether risks have been successfully managed.
* [[Decision Management]] – SE measurement results inform decision making by providing objective insight.
  
 
==Practical Considerations==
Key pitfalls and good practices related to SE measurement are described in the next two sections.
  
 
===Pitfalls===
Some of the key pitfalls encountered in planning and performing SE Measurement are provided in Table 1.
  
 
{|
|+ '''Table 1. Measurement Pitfalls.''' (SEBoK Original)
|-
! Name
! Description
|-
| Golden Measures
|
* Looking for the one measure or small set of measures that applies to all projects.
* No one-size-fits-all measure or measurement set exists.
* Each project has unique information needs (e.g., objectives, risks, and issues).
* The one exception is that, in some cases with consistent product lines, processes, and information needs, a small core set of measures may be defined for use across an organization.
|-
| Single-Pass Perspective
|
* Viewing measurement as a single-pass activity.
* To be effective, measurement needs to be performed continuously, including the periodic identification and prioritization of information needs and associated measures.
|-
| Unknown Information Need
|
* Performing measurement activities without understanding why the measures are needed and what information they provide.
* This can lead to wasted effort.
|-
| Inappropriate Usage
|
* Using measurement inappropriately, such as measuring the performance of individuals or making interpretations without context information.
* This can lead to bias in the results or incorrect interpretations.
|}
  
 
===Good Practices===
Some good practices gathered from the references are provided in Table 2.
  
{|
|+ '''Table 2. Measurement Good Practices.''' (SEBoK Original)
|-
! Name
! Description
|-
| Periodic Review
|
* Regularly review each measure collected.
|-
| Action Driven
|
* Measurement by itself does not control or improve process performance.
* Measurement results should be provided to decision makers for appropriate action.
|-
| Integration into Project Processes
|
* SE measurement should be integrated into the project as part of the ongoing project business rhythm.
* Data should be collected as processes are performed, not recreated as an afterthought.
|-
| Timely Information
|
* Information should be obtained early enough to allow necessary action to control or treat risks, adjust tactics and strategies, etc.
* When such actions are not successful, measurement results need to help decision-makers determine contingency actions or correct problems.
|-
| Relevance to Decision Makers
|
* Successful measurement requires the communication of meaningful information to the decision-makers.
* Results should be presented in the decision-makers' preferred format, which allows accurate and expeditious interpretation of the results.
|-
| Data Availability
|
* Decisions can rarely wait for a complete or perfect set of data, so measurement information often needs to be derived from analysis of the best available data, complemented by real-time events and qualitative insight (including experience).
|-
| Historical Data
|
* Use historical data as the basis of plans, measure what is planned versus what is achieved, archive actual achieved results, and use archived data as a historical basis for the next planning effort.
|-
| Information Model
|
* The information model defined in ISO/IEC/IEEE 15939 (2007) provides a means to link the entities that are measured to the associated measures and to the identified information need, and also describes how the measures are converted into indicators that provide insight to decision-makers.
|}
Additional information can be found in the ''[[Systems Engineering Measurement Primer]]'', Section 4.2 (Frenz et al. 2010), and INCOSE ''Systems Engineering Handbook'', Section 5.7.1.5 (2012).
  
 
==References==  
===Works Cited===
 +
Frenz, P., G. Roedler, D.J. Gantzer, P. Baxter. 2010. ''[[Systems Engineering Measurement Primer]]: A Basic Introduction to Measurement Concepts and Use for Systems Engineering.'' Version 2.0. San Diego, CA: International Council on System Engineering (INCOSE).  INCOSE‐TP‐2010‐005‐02. Accessed April 13, 2015 at  http://www.incose.org/ProductsPublications/techpublications/PrimerMeasurement
  
INCOSE. 2012. ''Systems Engineering Handbook: A Guide for System Life Cycle Processes and Activities,'' version 3.2.2. San Diego, CA, USA: International Council on Systems Engineering (INCOSE), INCOSE-TP-2003-002-03.2.2.
  
ISO/IEC/IEEE. 2007. ''[[ISO/IEC/IEEE 15939|Systems and software engineering - Measurement process]]''. Geneva, Switzerland: International Organization for Standardization (ISO)/International Electrotechnical Commission (IEC), [[ISO/IEC/IEEE 15939]]:2007.  
  
ISO/IEC/IEEE. 2015. ''[[ISO/IEC/IEEE 15288|Systems and Software Engineering -- System Life Cycle Processes]]''. Geneva, Switzerland: International Organisation for Standardisation / International Electrotechnical Commissions / Institute of Electrical and Electronics Engineers. ISO/IEC/IEEE 15288:2015.  
  
Kasunic, M. and W. Anderson. 2004. ''Measuring Systems Interoperability: Challenges and Opportunities.'' Pittsburgh, PA, USA: Software Engineering Institute (SEI)/Carnegie Mellon University (CMU).  
  
McGarry, J., D. Card, C. Jones, B. Layman, E. Clark, J. Dean, and F. Hall. 2002. ''Practical Software Measurement: Objective Information for Decision Makers''. Boston, MA, USA: Addison-Wesley.
  
NASA. 2007. ''[[NASA Systems Engineering Handbook|Systems Engineering Handbook]].'' Washington, DC, USA: National Aeronautics and Space Administration (NASA), December 2007. NASA/SP-2007-6105.
  
Park, R.E., W.B. Goethert, and W.A. Florac. 1996. ''Goal-Driven Software Measurement – A Guidebook''. Pittsburgh, PA, USA: Software Engineering Institute (SEI)/Carnegie Mellon University (CMU), CMU/SEI-96-BH-002.  
  
PSM. 2011. "Practical Software and Systems Measurement." Accessed August 18, 2011. Available at: http://www.psmsc.com/.
  
PSM. 2000. ''[[Practical Software and Systems Measurement (PSM) Guide]],'' version 4.0c. Practical Software and System Measurement Support Center. Available at: http://www.psmsc.com/PSMGuide.asp.
  
PSM Safety & Security TWG. 2006. ''Safety Measurement,'' version 3.0. Practical Software and Systems Measurement. Available at: http://www.psmsc.com/Downloads/TechnologyPapers/SafetyWhitePaper_v3.0.pdf.
  
PSM Safety & Security TWG. 2006. ''Security Measurement,'' version 3.0. Practical Software and Systems Measurement. Available at: http://www.psmsc.com/Downloads/TechnologyPapers/SecurityWhitePaper_v3.0.pdf.
  
QuEST Forum. 2012. ''Quality Management System (QMS) Measurements Handbook,'' Release 5.0. Plano, TX, USA: Quest Forum.
  
Roedler, G., D. Rhodes, C. Jones, and H. Schimmoller. 2010. ''[[Systems Engineering Leading Indicators Guide]],'' version 2.0. San Diego, CA, USA: International Council on Systems Engineering (INCOSE), INCOSE-TP-2005-001-03.  
  
Roedler, G. and C. Jones. 2005. ''[[Technical Measurement Guide]],'' version 1.0. San Diego, CA, USA: International Council on Systems Engineering (INCOSE), INCOSE-TP-2003-020-01.
  
SEI. 2010. "Measurement and Analysis Process Area" in ''Capability Maturity Model Integrated (CMMI) for Development'', version 1.3. Pittsburgh, PA, USA: Software Engineering Institute (SEI)/Carnegie Mellon University (CMU).
  
Software Productivity Center, Inc. 2011. Software Productivity Center website. Accessed August 20, 2011. Available at: http://www.spc.ca/
  
Statz, J. et al. 2005. ''Measurement for Process Improvement,'' version 1.0. York, UK: Practical Software and Systems Measurement (PSM).
  
Tufte, E. 2006. ''The Visual Display of Quantitative Information.'' Cheshire, CT, USA: Graphics Press.
  
Wasson, C. 2005. ''System Analysis, Design, Development: Concepts, Principles, and Practices''. Hoboken, NJ, USA: John Wiley and Sons.
  
===Primary References===
 
 
Frenz, P., G. Roedler, D.J. Gantzer, P. Baxter. 2010. ''[[Systems Engineering Measurement Primer]]: A Basic Introduction to Measurement Concepts and Use for Systems Engineering.'' Version 2.0. San Diego, CA: International Council on System Engineering (INCOSE).  INCOSE‐TP‐2010‐005‐02. Accessed April 13, 2015 at  http://www.incose.org/ProductsPublications/techpublications/PrimerMeasurement
ISO/IEC/IEEE. 2007. ''[[ISO/IEC/IEEE 15939|Systems and Software Engineering - Measurement Process]]''. Geneva, Switzerland: International Organization for Standardization (ISO)/International Electrotechnical Commission (IEC), [[ISO/IEC/IEEE 15939]]:2007.  
  
PSM. 2000. ''[[Practical Software and Systems Measurement (PSM) Guide]],'' version 4.0c. Practical Software and System Measurement Support Center. Available at: http://www.psmsc.com.
  
Roedler, G., D. Rhodes, C. Jones, and H. Schimmoller. 2010. ''[[Systems Engineering Leading Indicators Guide]],'' version 2.0. San Diego, CA: International Council on Systems Engineering (INCOSE), INCOSE-TP-2005-001-03.  
  
Roedler, G. and C. Jones. 2005. ''[[Technical Measurement Guide]],'' version 1.0. San Diego, CA: International Council on Systems Engineering (INCOSE), INCOSE-TP-2003-020-01.
  
 
===Additional References===
Kasunic, M. and W. Anderson. 2004. ''Measuring Systems Interoperability: Challenges and Opportunities.'' Pittsburgh, PA, USA: Software Engineering Institute (SEI)/Carnegie Mellon University (CMU).  
 
 
McGarry, J. et al. 2002. ''Practical Software Measurement: Objective Information for Decision Makers''. Boston, MA, USA: Addison-Wesley.
  
NASA. 2007. ''[[NASA Systems Engineering Handbook]].'' Washington, DC, USA: National Aeronautics and Space Administration (NASA), December 2007. NASA/SP-2007-6105.
  
Park, Goethert, and Florac. 1996. ''Goal-Driven Software Measurement – A Guidebook''. Pittsburgh, PA, USA: Software Engineering Institute (SEI)/Carnegie Mellon University (CMU), CMU/SEI-96-BH-002.  
  
PSM. 2011. "Practical Software and Systems Measurement." Accessed August 18, 2011. Available at: http://www.psmsc.com/.
  
PSM Safety & Security TWG. 2006. ''Safety Measurement,'' version 3.0. Practical Software and Systems Measurement. Available at: http://www.psmsc.com/Downloads/TechnologyPapers/SafetyWhitePaper_v3.0.pdf.
  
PSM Safety & Security TWG. 2006. ''Security Measurement,'' version 3.0. Practical Software and Systems Measurement. Available at: http://www.psmsc.com/Downloads/TechnologyPapers/SecurityWhitePaper_v3.0.pdf.
  
SEI. 2010. "Measurement and Analysis Process Area" in ''Capability Maturity Model Integrated (CMMI) for Development'', version 1.3. Pittsburgh, PA, USA: Software Engineering Institute (SEI)/Carnegie Mellon University (CMU).
  
Software Productivity Center, Inc. 2011. Software Productivity Center web site. August 20, 2011. Available at: http://www.spc.ca/
  
Statz, J. 2005. ''Measurement for Process Improvement,'' version 1.0. York, UK: Practical Software and Systems Measurement (PSM).
  
Vose, D. 2000. ''Quantitative Risk Analysis.'' 2nd ed. New York, NY, USA: John Wiley & Sons.
+
Tufte, E. 2006. ''The Visual Display of Quantitative Information.'' Cheshire, CT, USA: Graphics Press.
  
Willis, H.H., A.R. Morral, T.K. Kelly, and J.J. Medby. 2005. ''Estimating Terrorism Risk''. Santa Monica, CA, USA: The RAND Corporation, MG-388.
+
Wasson, C. 2005. ''System Analysis, Design, Development: Concepts, Principles, and Practices''. Hoboken, NJ, USA: John Wiley and Sons.
  
 
----
 
----
<center>[[Assessment and Control|< Previous Article]] | [[Systems Engineering Management|Parent Article]] | [[Measurement|Next Article >]]</center>
+
<center>[[Risk Management|< Previous Article]] | [[Systems Engineering Management|Parent Article]] | [[Decision Management|Next Article >]]</center>
  
 
<center>'''SEBoK v. 2.0, released 1 June 2019'''</center>
 
<center>'''SEBoK v. 2.0, released 1 June 2019'''</center>

Revision as of 02:59, 19 October 2019

Measurement and the accompanying analysis are fundamental elements of systems engineering (SE) and technical management. SE measurement provides information relating to the products developed, services provided, and processes implemented to support effective management of the processes and to objectively evaluate product or service quality. Measurement supports realistic planning, provides insight into actual performance, and facilitates assessment of suitable actions (Roedler and Jones 2005, 1-65; Frenz et al. 2010).

Appropriate measures and indicators are essential inputs to tradeoff analyses to balance cost, schedule, and technical objectives. Periodic analysis of the relationships between measurement results and review of the requirements and attributes of the system provides insights that help to identify issues early, when they can be resolved with less impact. Historical data, together with project or organizational context information, forms the basis for the predictive models and methods that should be used.

Fundamental Concepts

The discussion of measurement in this article is based on some fundamental concepts. Roedler et al. (2005, 1-65) states three key SE measurement concepts that are paraphrased here:

  1. SE measurement is a consistent but flexible process that is tailored to the unique information needs and characteristics of a particular project or organization and revised as information needs change.
  2. Decision makers must understand what is being measured. Key decision-makers must be able to connect what is being measured to what they need to know and what decisions they need to make as part of a closed-loop, feedback control process (Frenz et al. 2010).
  3. Measurement must be used to be effective.

Measurement Process Overview

The measurement process as presented here consists of four activities taken from Practical Software and Systems Measurement (PSM 2011) and described in ISO/IEC/IEEE 15939 and McGarry et al. (2002):

  1. establish and sustain commitment
  2. plan measurement
  3. perform measurement
  4. evaluate measurement

This approach has been the basis for establishing a common measurement process across the software and systems engineering communities. It has been adopted by the Capability Maturity Model Integration (CMMI) measurement and analysis process area (SEI 2010), as well as by international systems and software engineering standards (ISO/IEC/IEEE 15939; ISO/IEC/IEEE 15288). The International Council on Systems Engineering (INCOSE) Measurement Working Group has also adopted this approach for several of its measurement assets, such as the INCOSE SE Measurement Primer (Frenz et al. 2010) and the Technical Measurement Guide (Roedler and Jones 2005). This consistent treatment of measurement allows the engineering community to communicate more effectively about measurement. The process is illustrated in Figure 1 from Roedler and Jones (2005) and McGarry et al. (2002).

Figure 1. Four Key Measurement Process Activities (PSM 2011). Reprinted with permission of Practical Software and Systems Measurement (PSM). All other rights are reserved by the copyright owner.

Establish and Sustain Commitment

This activity focuses on establishing the resources, training, and tools needed to implement a measurement process and on ensuring that there is management commitment to use the information that is produced. Refer to PSM (2011) and SPC (2011) for additional detail.

Plan Measurement

This activity focuses on defining measures that provide insight into project or organization information needs. This includes identifying what the decision-makers need to know and when they need to know it, expressing these information needs in a manner that can be measured, and identifying, prioritizing, selecting, and specifying measures based on project and organization processes (Jones 2003, 15-19). This activity also identifies the reporting format, forums, and target audience for the information provided by the measures.

The following are a few widely used approaches for identifying information needs and deriving associated measures; each can be focused on identifying the measures needed for SE management:

  • The PSM approach, which uses a set of information categories, measurable concepts, and candidate measures to aid the user in determining relevant information needs and the characteristics of those needs on which to focus (PSM 2011).
  • The Goal-Question-Metric (GQM) approach, which identifies explicit measurement goals. Each goal is decomposed into several questions that help in the selection of measures that address the question and provide insight into goal achievement (Park, Goethert, and Florac 1996). A hypothetical GQM decomposition is sketched after this list.
  • Software Productivity Center's (SPC's) 8-step Metrics Program, which also includes stating the goals and defining the measures needed to gain insight for achieving those goals (SPC 2011).
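
As a concrete illustration of the GQM idea referenced above, the following minimal Python sketch shows how a measurement goal can be decomposed into questions and candidate measures. The goal, questions, and measures shown are hypothetical examples, not taken from the cited guidebook.

```python
# Minimal, hypothetical Goal-Question-Metric (GQM) decomposition.
# The goal, questions, and candidate measures are illustrative only.
gqm = {
    "goal": "Deliver the system with stable, verified requirements",
    "questions": {
        "How stable are the requirements?": [
            "requirements volatility (changes per month)",
            "percentage of TBD/TBR requirements",
        ],
        "Is requirements verification on track?": [
            "requirements verified vs. planned (cumulative)",
            "defects found per verification activity",
        ],
    },
}

# Walk the decomposition: each question points to the measures that inform it.
for question, measures in gqm["questions"].items():
    print(f"Q: {question}")
    for measure in measures:
        print(f"   candidate measure -> {measure}")
```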

The following are good sources for candidate measures that address information needs and measurable concepts/questions:

  • PSM Web Site (PSM 2011)
  • PSM Guide, Version 4.0, Chapters 3 and 5 (PSM 2000)
  • SE Leading Indicators Guide, Version 2.0, Section 3 (Roedler et al. 2010)
  • Technical Measurement Guide, Version 1.0, Section 10 (Roedler and Jones 2005, 1-65)
  • Safety Measurement (PSM White Paper), Version 3.0, Section 3.4 (PSM Safety & Security TWG 2006)
  • Security Measurement (PSM White Paper), Version 3.0, Section 7 (PSM Safety & Security TWG 2006)
  • Measuring Systems Interoperability, Section 5 and Appendix C (Kasunic and Anderson 2004)
  • Measurement for Process Improvement (PSM Technical Report), version 1.0, Appendix E (Statz 2005)

The INCOSE SE Measurement Primer (Frenz et al. 2010) provides a list of attributes of a good measure with definitions for each attribute; these attributes include relevance, completeness, timeliness, simplicity, cost effectiveness, repeatability, and accuracy. Evaluating candidate measures against these attributes can help assure the selection of more effective measures.

The details of each measure need to be unambiguously defined and documented. Templates for the specification of measures and indicators are available on the PSM website (2011) and in Goethert and Siviy (2004).
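
To make this concrete, the following sketch shows the kind of detail such a specification might capture for one hypothetical measure. The field names and values are illustrative assumptions, not the contents of the PSM or Goethert and Siviy templates.

```python
# Hypothetical specification of a single measure/indicator. The fields are
# representative of what a specification template documents, not a template
# reproduced from any cited source.
requirements_volatility_spec = {
    "information_need": "Are requirements stable enough to baseline the design?",
    "base_measures": ["number of requirement changes", "total number of requirements"],
    "derived_measure": "requirements volatility = changes / total requirements",
    "indicator": "monthly volatility trend compared against an agreed threshold",
    "decision_criteria": "investigate causes if volatility exceeds 5% in any month",
    "collection_frequency": "monthly",
    "data_source": "requirements management tool change log",
    "audience": "project manager, chief systems engineer",
}

for field, value in requirements_volatility_spec.items():
    print(f"{field}: {value}")
```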

Perform Measurement

This activity focuses on the collection and preparation of measurement data, measurement analysis, and the presentation of the results to inform decision makers. The preparation of the measurement data includes verification, normalization, and aggregation of the data, as applicable. Analysis includes estimation, feasibility analysis of plans, and performance analysis of actual data against plans.

The quality of the measurement results depends on the collection and preparation of valid, accurate, and unbiased data. Data verification, validation, preparation, and analysis techniques are discussed in PSM (2011) and SEI (2010). Per the TL 9000 quality management system guidance, the analysis step should integrate quantitative measurement results with other qualitative project information in order to provide managers the feedback needed for effective decision making (QuEST Forum 2012, 5-10). This richer information gives users a broader picture and puts the results in the appropriate context.
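
As a simple, hedged illustration of performance analysis of actual data against plans, the sketch below compares hypothetical monthly actuals to planned values and flags variances that exceed an assumed 10% threshold; all numbers are invented for illustration.

```python
# Hypothetical plan-vs-actual analysis for a cumulative measure
# (e.g., requirements verified). Data and variance threshold are illustrative.
planned = {"Jan": 20, "Feb": 45, "Mar": 75, "Apr": 110}
actual = {"Jan": 18, "Feb": 40, "Mar": 58, "Apr": 80}
VARIANCE_THRESHOLD = 0.10  # flag variances larger than 10% of plan

for month in planned:
    variance = (actual[month] - planned[month]) / planned[month]
    flag = "INVESTIGATE" if abs(variance) > VARIANCE_THRESHOLD else "on track"
    print(f"{month}: planned={planned[month]:3d} actual={actual[month]:3d} "
          f"variance={variance:+.0%} -> {flag}")
```

In practice, the flagged variances would be interpreted together with qualitative project information, as noted above, before any action is taken.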

There is a significant body of guidance available on good ways to present quantitative information. Edward Tufte has several books focused on the visualization of information, including The Visual Display of Quantitative Information (Tufte 2001).

Other resources that contain further information pertaining to understanding and using measurement results include:

  • PSM (2011)
  • ISO/IEC/IEEE 15939, clauses 4.3.3 and 4.3.4
  • Roedler and Jones (2005), sections 6.4, 7.2, and 7.3

Evaluate Measurement

This activity involves the periodic evaluation and improvement of the measurement process and the specific measures. One objective is to ensure that the measures continue to align with the business goals and information needs, and continue to provide useful insight. This activity should also evaluate the SE measurement activities, resources, and infrastructure to make sure they support the needs of the project and organization. Refer to PSM (2011) and Practical Software Measurement: Objective Information for Decision Makers (McGarry et al. 2002) for additional detail.

Systems Engineering Leading Indicators

Leading indicators are aimed at providing predictive insight that pertains to an information need. An SE leading indicator is a measure for evaluating the effectiveness of how a specific activity is applied on a project in a manner that provides information about impacts that are likely to affect the system performance objectives (Roedler et al. 2010). Leading indicators may be individual measures or collections of measures and associated analysis that provide future systems engineering performance insight throughout the life cycle of the system; they support the effective management of systems engineering by providing visibility into expected project performance and potential future states (Roedler et al. 2010).

As shown in Figure 2, a leading indicator is composed of characteristics, a condition, and a predicted behavior. The characteristics and conditions are analyzed on a periodic or as-needed basis to predict behavior within a given confidence level and within an accepted time range into the future. More information is also provided by Roedler et al. (2010).

Figure 2. Composition of a Leading Indicator (Roedler et al. 2010). Reprinted with permission of the International Council on Systems Engineering (INCOSE) and Practical Software and Systems Measurement (PSM). All other rights are reserved by the copyright owner.
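
As a hedged illustration of this idea, the sketch below fits a simple linear trend to a hypothetical characteristic (cumulative requirements validated) and extrapolates it to predict whether an assumed objective will be met. A real leading indicator would also state the confidence level and time range of the prediction; the data and targets here are invented.

```python
# Hypothetical leading-indicator sketch: extrapolate the observed trend of a
# characteristic to predict a future state. All data and targets are invented.
months = [1, 2, 3, 4, 5, 6]
validated = [10, 22, 31, 44, 52, 63]   # cumulative requirements validated
target, target_month = 120, 10         # objective: 120 validated by month 10

# Ordinary least-squares fit of validated vs. month (simple trend model).
n = len(months)
mean_x, mean_y = sum(months) / n, sum(validated) / n
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(months, validated))
         / sum((x - mean_x) ** 2 for x in months))
intercept = mean_y - slope * mean_x

predicted = intercept + slope * target_month
print(f"Predicted at month {target_month}: {predicted:.0f} (target: {target})")
print("Indicator:", "objective likely to be met" if predicted >= target
      else "objective at risk -- early corrective action warranted")
```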

Technical Measurement

Technical measurement is the set of measurement activities used to provide information about progress in the definition and development of the technical solution, ongoing assessment of the associated risks and issues, and the likelihood of meeting the critical objectives of the acquirer. This insight helps an engineer make better decisions throughout the life cycle of a system and increase the probability of delivering a technical solution that meets both the specified requirements and the mission needs. The insight is also used in trade-off decisions when performance is not within the thresholds or goals.

Technical measurement includes measures of effectiveness (MOEs), measures of performance (MOPs), and technical performance measures (TPMs) (Roedler and Jones 2005, 1-65). The relationships between these types of technical measures are shown in Figure 3 and explained in the reference for Figure 3. Using the measurement process described above, technical measurement can be planned early in the life cycle and then performed throughout the life cycle with increasing levels of fidelity as the technical solution is developed, facilitating predictive insight and preventive or corrective actions. More information about technical measurement can be found in the NASA Systems Engineering Handbook, System Analysis, Design, Development: Concepts, Principles, and Practices, and the Systems Engineering Leading Indicators Guide (NASA 2007, Section 6.7.2.2; Wasson 2005, Chapter 34; Roedler and Jones 2005).

Figure 3. Relationship of the Technical Measures (Roedler et al 2010). Reprinted with permission of the International Council on Systems Engineering (INCOSE) and Practical Software and Systems Measurement (PSM). All other rights are reserved by the copyright owner.
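
The following sketch gives a hedged example of tracking a single hypothetical TPM (system dry mass) against its goal and threshold values as the estimate matures across milestones; the parameter, values, and milestone names are assumptions for illustration only.

```python
# Hypothetical technical performance measure (TPM) tracking sketch.
# Parameter, limits, estimates, and milestones are illustrative only.
tpm = {
    "name": "system dry mass (kg)",
    "goal": 950.0,        # desired (objective) value
    "threshold": 1000.0,  # maximum acceptable value
}
# The current best estimate typically gains fidelity over the life cycle
# (allocation -> analysis -> test), so it is reassessed at each milestone.
estimates = [("SRR", 900.0), ("PDR", 940.0), ("CDR", 975.0), ("TRR", 990.0)]

for milestone, estimate in estimates:
    margin = tpm["threshold"] - estimate
    if estimate <= tpm["goal"]:
        status = "within goal"
    elif estimate <= tpm["threshold"]:
        status = "between goal and threshold -- watch closely"
    else:
        status = "exceeds threshold -- trade-off or corrective action needed"
    print(f"{milestone}: {tpm['name']} = {estimate:.0f}, "
          f"margin to threshold = {margin:.0f} kg -> {status}")
```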

Service Measurement

The same measurement activities can be applied for service measurement; however, the context and measures will be different. Service providers have a need to balance efficiency and effectiveness, which may be opposing objectives. Good service measures are outcome-based, focus on elements important to the customer (e.g., service availability, reliability, performance, etc.), and provide timely, forward-looking information.

For services, the terms critical success factors (CSF) and key performance indicators (KPI) are used often when discussing measurement. CSFs are the key elements of the service or service infrastructure that are most important to achieve the business objectives. KPIs are specific values or characteristics measured to assess achievement of those objectives.
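
As a hedged illustration, the sketch below computes one common outcome-based KPI, monthly service availability, and compares it against an assumed target; the outage data and target are invented for illustration.

```python
# Hypothetical service KPI: monthly availability versus a target value.
MINUTES_PER_MONTH = 30 * 24 * 60        # simplifying assumption: 30-day month
outage_minutes = [42, 0, 13, 95]        # one entry per month (illustrative)
availability_target = 0.999             # assumed "three nines" target

for month, downtime in enumerate(outage_minutes, start=1):
    availability = 1 - downtime / MINUTES_PER_MONTH
    result = "met" if availability >= availability_target else "MISSED"
    print(f"Month {month}: availability = {availability:.4%} "
          f"(target {availability_target:.1%}) -> {result}")
```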

More information about service measurement can be found in the Service Design and Continual Service Improvement volumes of BMP (2010, 1). More information on service SE can be found in the Service Systems Engineering article.

Linkages to Other Systems Engineering Management Topics

SE measurement has linkages to other systems engineering management topics. The following are a few key linkages adapted from Roedler and Jones (2005):

  • Planning – SE measurement provides the historical data needed for realistic planning and supports the estimation for, and feasibility analysis of, project plans.
  • Assessment and Control – SE measurement provides the objective information needed to perform the assessment and determination of appropriate control actions. The use of leading indicators allows for early assessment and control actions that identify risks and/or provide insight to allow early treatment of risks to minimize potential impacts.
  • Risk Management – SE risk management identifies the information needs that can impact project and organizational performance. SE measurement data helps to quantify risks and subsequently provides information about whether risks have been successfully managed.
  • Decision Management – SE Measurement results inform decision making by providing objective insight.

Practical Considerations

Key pitfalls and good practices related to SE measurement are described in the next two sections.

Pitfalls

Some of the key pitfalls encountered in planning and performing SE Measurement are provided in Table 1.

Table 1. Measurement Pitfalls. (SEBoK Original)
Golden Measures
  • Looking for the one measure or small set of measures that applies to all projects.
  • No one-size-fits-all measure or measurement set exists.
  • Each project has unique information needs (e.g., objectives, risks, and issues).
  • The one exception is that, in some cases with consistent product lines, processes, and information needs, a small core set of measures may be defined for use across an organization.
Single-Pass Perspective
  • Viewing measurement as a single-pass activity.
  • To be effective, measurement needs to be performed continuously, including the periodic identification and prioritization of information needs and associated measures.
Unknown Information Need
  • Performing measurement activities without an understanding of why the measures are needed and what information they provide.
  • This can lead to wasted effort.
Inappropriate Usage
  • Using measurement inappropriately, such as measuring the performance of individuals or making interpretations without context information.
  • This can lead to bias in the results or incorrect interpretations.

Good Practices

Some good practices, gathered from the references, are provided in Table 2.

Table 2. Measurement Good Practices. (SEBoK Original)
Periodic Review
  • Regularly review each measure collected.
Action Driven
  • Measurement by itself does not control or improve process performance.
  • Measurement results should be provided to decision makers for appropriate action.
Integration into Project Processes
  • SE measurement should be integrated into the project as part of the ongoing project business rhythm.
  • Data should be collected as processes are performed, not recreated as an afterthought.
Timely Information
  • Information should be obtained early enough to allow necessary action to control or treat risks, adjust tactics and strategies, etc.
  • When such actions are not successful, measurement results need to help decision-makers determine contingency actions or correct problems.
Relevance to Decision Makers
  • Successful measurement requires the communication of meaningful information to the decision-makers.
  • Results should be presented in the decision-makers' preferred format, which allows accurate and expeditious interpretation of the results.
Data Availability
  • Decisions can rarely wait for a complete or perfect set of data, so measurement information often needs to be derived from analysis of the best available data, complemented by real-time events and qualitative insight (including experience).
Historical Data
  • Use historical data as the basis of plans, measure what is planned versus what is achieved, archive actual achieved results, and use the archived data as a historical basis for the next planning effort.
Information Model
  • The information model defined in ISO/IEC/IEEE 15939 (2007) provides a means to link the entities that are measured to the associated measures and to the identified information need; it also describes how measures are converted into indicators that provide insight to decision-makers (a hypothetical instance of this chain is sketched after this table).
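
As referenced in the Information Model entry above, the following sketch gives a hedged, hypothetical instance of that chain, deriving an indicator from two base measures for one information need. The names, values, and decision threshold are illustrative and are not the normative model from the standard.

```python
# Illustrative measurement information chain:
# base measures -> derived measure -> indicator -> information need.
base_measures = {
    "defects_found_in_reviews": 24,   # counted from review records (assumed)
    "review_hours": 60,               # counted from effort records (assumed)
}

# Derived measure: combine base measures with a measurement function.
defect_detection_rate = (base_measures["defects_found_in_reviews"]
                         / base_measures["review_hours"])

# Indicator: apply a simple analysis model (a decision threshold assumed to
# come from historical organizational data).
EXPECTED_RATE = 0.3   # defects per review hour (assumed)
if defect_detection_rate < EXPECTED_RATE:
    indicator = "reviews may be ineffective -- investigate review practices"
else:
    indicator = "review effectiveness appears adequate"

print("Information need: are peer reviews effective at finding defects?")
print(f"Derived measure: {defect_detection_rate:.2f} defects per review hour")
print(f"Indicator: {indicator}")
```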

Additional information can be found in the Systems Engineering Measurement Primer, Section 4.2 (Frenz et al. 2010), and the INCOSE Systems Engineering Handbook, Section 5.7.1.5 (INCOSE 2012).

References

Works Cited

Frenz, P., G. Roedler, D.J. Gantzer, and P. Baxter. 2010. Systems Engineering Measurement Primer: A Basic Introduction to Measurement Concepts and Use for Systems Engineering, version 2.0. San Diego, CA, USA: International Council on Systems Engineering (INCOSE), INCOSE‐TP‐2010‐005‐02. Accessed April 13, 2015 at http://www.incose.org/ProductsPublications/techpublications/PrimerMeasurement

INCOSE. 2012. Systems Engineering Handbook: A Guide for System Life Cycle Processes and Activities, version 3.2.2. San Diego, CA, USA: International Council on Systems Engineering (INCOSE), INCOSE-TP-2003-002-03.2.2.

ISO/IEC/IEEE. 2007. Systems and software engineering - Measurement process. Geneva, Switzerland: International Organization for Standardization (ISO)/International Electrotechnical Commission (IEC), ISO/IEC/IEEE 15939:2007.

ISO/IEC/IEEE. 2015. Systems and Software Engineering - System Life Cycle Processes. Geneva, Switzerland: International Organization for Standardization (ISO)/International Electrotechnical Commission (IEC)/Institute of Electrical and Electronics Engineers (IEEE), ISO/IEC/IEEE 15288:2015.

Kasunic, M. and W. Anderson. 2004. Measuring Systems Interoperability: Challenges and Opportunities. Pittsburgh, PA, USA: Software Engineering Institute (SEI)/Carnegie Mellon University (CMU).

McGarry, J., D. Card, C. Jones, B. Layman, E. Clark, J. Dean, and F. Hall. 2002. Practical Software Measurement: Objective Information for Decision Makers. Boston, MA, USA: Addison-Wesley.

NASA. 2007. Systems Engineering Handbook. Washington, DC, USA: National Aeronautics and Space Administration (NASA), December 2007. NASA/SP-2007-6105.

Park, R.E., W.B. Goethert, and W.A. Florac. 1996. Goal-Driven Software Measurement – A Guidebook. Pittsburgh, PA, USA: Software Engineering Institute (SEI)/Carnegie Mellon University (CMU), CMU/SEI-96-BH-002.

PSM. 2011. "Practical Software and Systems Measurement." Accessed August 18, 2011. Available at: http://www.psmsc.com/.

PSM. 2000. Practical Software and Systems Measurement (PSM) Guide, version 4.0c. Practical Software and System Measurement Support Center. Available at: http://www.psmsc.com/PSMGuide.asp.

PSM Safety & Security TWG. 2006. Safety Measurement, version 3.0. Practical Software and Systems Measurement. Available at: http://www.psmsc.com/Downloads/TechnologyPapers/SafetyWhitePaper_v3.0.pdf.

PSM Safety & Security TWG. 2006. Security Measurement, version 3.0. Practical Software and Systems Measurement. Available at: http://www.psmsc.com/Downloads/TechnologyPapers/SecurityWhitePaper_v3.0.pdf.

QuEST Forum. 2012. Quality Management System (QMS) Measurements Handbook, Release 5.0. Plano, TX, USA: QuEST Forum.

Roedler, G., D. Rhodes, C. Jones, and H. Schimmoller. 2010. Systems Engineering Leading Indicators Guide, version 2.0. San Diego, CA, USA: International Council on Systems Engineering (INCOSE), INCOSE-TP-2005-001-03.

Roedler, G. and C. Jones. 2005. Technical Measurement Guide, version 1.0. San Diego, CA, USA: International Council on Systems Engineering (INCOSE), INCOSE-TP-2003-020-01.

SEI. 2010. "Measurement and Analysis Process Area" in Capability Maturity Model Integrated (CMMI) for Development, version 1.3. Pittsburgh, PA, USA: Software Engineering Institute (SEI)/Carnegie Mellon University (CMU).

Software Productivity Center, Inc. 2011. Software Productivity Center web site. August 20, 2011. Available at: http://www.spc.ca/

Statz, J. et al. 2005. Measurement for Process Improvement, version 1.0. York, UK: Practical Software and Systems Measurement (PSM).

Tufte, E. 2001. The Visual Display of Quantitative Information, 2nd ed. Cheshire, CT, USA: Graphics Press.

Wasson, C. 2005. System Analysis, Design, Development: Concepts, Principles, and Practices. Hoboken, NJ, USA: John Wiley and Sons.

Primary References

Frenz, P., G. Roedler, D.J. Gantzer, and P. Baxter. 2010. Systems Engineering Measurement Primer: A Basic Introduction to Measurement Concepts and Use for Systems Engineering, version 2.0. San Diego, CA, USA: International Council on Systems Engineering (INCOSE), INCOSE‐TP‐2010‐005‐02. Accessed April 13, 2015 at http://www.incose.org/ProductsPublications/techpublications/PrimerMeasurement

ISO/IEC/IEEE. 2007. Systems and Software Engineering - Measurement Process. Geneva, Switzerland: International Organization for Standardization (ISO)/International Electrotechnical Commission (IEC), ISO/IEC/IEEE 15939:2007.

PSM. 2000. Practical Software and Systems Measurement (PSM) Guide, version 4.0c. Practical Software and System Measurement Support Center. Available at: http://www.psmsc.com.

Roedler, G., D. Rhodes, C. Jones, and H. Schimmoller. 2010. Systems Engineering Leading Indicators Guide, version 2.0. San Diego, CA: International Council on Systems Engineering (INCOSE), INCOSE-TP-2005-001-03.

Roedler, G. and C. Jones. 2005. Technical Measurement Guide, version 1.0. San Diego, CA, USA: International Council on Systems Engineering (INCOSE), INCOSE-TP-2003-020-01.

Additional References

Kasunic, M. and W. Anderson. 2004. Measuring Systems Interoperability: Challenges and Opportunities. Pittsburgh, PA, USA: Software Engineering Institute (SEI)/Carnegie Mellon University (CMU).

McGarry, J. et al. 2002. Practical Software Measurement: Objective Information for Decision Makers. Boston, MA, USA: Addison-Wesley.

NASA. 2007. NASA Systems Engineering Handbook. Washington, DC, USA: National Aeronautics and Space Administration (NASA), December 2007. NASA/SP-2007-6105.

Park, R.E., W.B. Goethert, and W.A. Florac. 1996. Goal-Driven Software Measurement – A Guidebook. Pittsburgh, PA, USA: Software Engineering Institute (SEI)/Carnegie Mellon University (CMU), CMU/SEI-96-BH-002.

PSM. 2011. "Practical Software and Systems Measurement." Accessed August 18, 2011. Available at: http://www.psmsc.com/.

PSM Safety & Security TWG. 2006. Safety Measurement, version 3.0. Practical Software and Systems Measurement. Available at: http://www.psmsc.com/Downloads/TechnologyPapers/SafetyWhitePaper_v3.0.pdf.

PSM Safety & Security TWG. 2006. Security Measurement, version 3.0. Practical Software and Systems Measurement. Available at: http://www.psmsc.com/Downloads/TechnologyPapers/SecurityWhitePaper_v3.0.pdf.

SEI. 2010. "Measurement and Analysis Process Area" in Capability Maturity Model Integrated (CMMI) for Development, version 1.3. Pittsburgh, PA, USA: Software Engineering Institute (SEI)/Carnegie Mellon University (CMU).

Software Productivity Center, Inc. 2011. Software Productivity Center web site. August 20, 2011. Available at: http://www.spc.ca/

Statz, J. et al. 2005. Measurement for Process Improvement, version 1.0. York, UK: Practical Software and Systems Measurement (PSM).

Tufte, E. 2001. The Visual Display of Quantitative Information, 2nd ed. Cheshire, CT, USA: Graphics Press.

Wasson, C. 2005. System Analysis, Design, Development: Concepts, Principles, and Practices. Hoboken, NJ, USA: John Wiley and Sons.


< Previous Article | Parent Article | Next Article >
SEBoK v. 2.0, released 1 June 2019