----
'''''Lead Authors:''''' ''Ray Madachy, Andy Pickard, Garry Roedler'', '''''Contributing Author:''''' ''Richard Turner''
----
The purpose of systems engineering assessment and control (SEAC) is to provide adequate visibility into the {{Term|Project (glossary)|project’s}} actual technical progress and {{Term|Risk (glossary)|risks}} with respect to the technical {{Term|Plan (glossary)|plans}} (i.e., the {{Term|Systems Engineering Plan (SEP) (glossary)|systems engineering management plan}} (SEMP) or {{Term|Systems Engineering Plan (SEP) (glossary)|systems engineering plan}} (SEP) and subordinate plans). This visibility allows the project team to take timely preventive action when disruptive trends are recognized, or corrective action when performance deviates beyond established thresholds or expected values. SEAC includes preparing for and conducting reviews and audits to monitor performance. The results of the reviews and {{Term|Measurement (glossary)|measurement}} analyses are used to identify and record findings/discrepancies and may lead to causal analysis and corrective/preventive action plans. Action plans are implemented, tracked, and monitored to closure. (NASA 2007, Section 6.7; SEG-ITS 2009, Sections 3.9.3 and 3.9.10; INCOSE 2010, Clause 6.2; SEI 2007)
  
==Systems Engineering Assessment and Control Process Overview==
The SEAC process involves determining and initiating the appropriate handling strategies and actions for findings and/or discrepancies that are uncovered in the enterprise, infrastructure, or life cycle activities associated with the project. Analysis of the causes of the findings/discrepancies aids in the determination of appropriate handling strategies. Implementation of approved preventive, corrective, or improvement actions ensures satisfactory completion of the project within planned technical, schedule, and cost objectives. Potential action plans for findings and/or discrepancies are reviewed in the context of the overall set of actions and priorities in order to optimize the benefits to the project and/or organization. Interrelated items are analyzed together to obtain a consistent and cost-effective resolution.
  
The SEAC process includes the following steps:
* monitor and review technical performance and resource use against plans (see the illustrative sketch following this list)
* monitor technical risk, escalate significant risks to the project risk register and seek project funding to execute risk mitigation plans
* hold technical reviews and report outcomes at the project reviews
* analyze issues and determine appropriate actions
* manage actions to closure
* hold a post-delivery assessment (also known as a post-project review) to capture knowledge associated with the project (this may be a separate technical assessment or it may be conducted as part of the project assessment and control process).
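
The comparison of actuals against planned thresholds that underlies the monitoring steps above can be outlined in a short sketch. The following Python fragment is purely illustrative (it is not part of the SEBoK or of any referenced standard); the measure names, threshold values, and function name are hypothetical:

<syntaxhighlight lang="python">
# Illustrative sketch only: compare measured technical performance and
# resource-use indicators against planned thresholds and flag candidate
# preventive or corrective actions. All names and values are hypothetical.

PLAN = {
    # measure: (planned value, preventive-action threshold, corrective-action threshold)
    "mass_kg":        (1200.0, 1230.0, 1260.0),
    "power_w":        (450.0,  470.0,  495.0),
    "se_hours_spent": (8000.0, 8400.0, 8800.0),
}

def assess(actuals):
    """Return findings for measures that deviate beyond expected values."""
    findings = []
    for measure, (planned, preventive, corrective) in PLAN.items():
        actual = actuals.get(measure)
        if actual is None:
            findings.append(f"{measure}: no measurement available")
        elif actual >= corrective:
            findings.append(f"{measure}: {actual} exceeds corrective-action "
                            f"threshold {corrective} (plan {planned})")
        elif actual >= preventive:
            findings.append(f"{measure}: {actual} above preventive-action "
                            f"threshold {preventive} (plan {planned})")
    return findings

print(assess({"mass_kg": 1245.0, "power_w": 452.0, "se_hours_spent": 7900.0}))
</syntaxhighlight>

In practice the planned values and thresholds would come from the SEMP/SEP and the measurement plan, and findings such as these would feed the issue analysis and action management steps rather than being acted on automatically.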
  
The following activities are normally conducted as part of a project assessment and control process:
* authorization, release and closure of work
* monitor project performance and resource usage against plan
* monitor project risk and authorize expenditure of project funds to execute risk mitigation plans
* hold project reviews
* analyze issues and determine appropriate actions
* manage actions to closure (see the sketch following this list)
* hold a post-delivery assessment (also known as a post-project review) to capture knowledge associated with the project  
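
As another purely illustrative sketch (not drawn from the SEBoK or the referenced standards), the following Python fragment shows one way actions raised at reviews might be recorded and managed to closure; the class names, fields, and statuses are hypothetical:

<syntaxhighlight lang="python">
# Illustrative sketch only: a minimal register for tracking review actions
# to closure. All class, field, and status names are hypothetical.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class ActionItem:
    identifier: str
    description: str
    owner: str
    due: date
    status: str = "open"        # e.g. open -> in progress -> closed
    resolution: str = ""

    def close(self, resolution):
        self.status = "closed"
        self.resolution = resolution

@dataclass
class ActionRegister:
    items: list = field(default_factory=list)

    def open_items(self):
        return [i for i in self.items if i.status != "closed"]

    def overdue(self, today):
        return [i for i in self.open_items() if i.due < today]

# Example: record an action raised at a design review and report overdue items.
register = ActionRegister()
register.items.append(ActionItem("CDR-012", "Re-verify thermal margins",
                                 "J. Smith", date(2024, 7, 1)))
print([item.identifier for item in register.overdue(date(2024, 8, 1))])
</syntaxhighlight>

A register of this kind would typically also capture the originating review, causal analysis results, and links to the risk register, so that action status can be rolled up at project reviews.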
  
Examples of major technical reviews used in SEAC, drawn from DAU (2010), are shown in Table 1.
{|
|+ '''Table 1. Major Technical Review Examples (DAU 2010).''' Released by Defense Acquisition University (DAU)/U.S. Department of Defense (DoD).
|-
! Name
! Description
|-
| Alternative Systems Review
| A multi-disciplined review to ensure the resulting set of requirements agrees with the customers' needs and expectations.
|-
| {{Term|Critical Design Review (CDR) (glossary)|Critical Design Review (CDR)}}
| A multi-disciplined review establishing the initial product baseline to ensure that the system under review has a reasonable expectation of satisfying the requirements of the capability development document within the currently allocated budget and schedule.
|-
| Functional Configuration Audit
| Formal examination of the as-tested characteristics of a {{Term|Configuration (glossary)|configuration}} item (hardware and software) with the objective of verifying that actual performance complies with design and interface requirements in the functional baseline.
|-
| In-Service Review
| A multi-disciplined product and process assessment that is performed to ensure that the system under review is operationally employed with well-understood and managed risk.
|-
| Initial Technical Review
| A multi-disciplined review that supports a program's initial program objective memorandum submission.
|-
| Integrated Baseline Review
| A joint assessment conducted by the government program manager and the contractor to establish the performance measurement baseline.
|-
| Operational Test Readiness Review
| A multi-disciplined product and process assessment to ensure that the system can proceed into initial operational test and evaluation with a high probability of success, and also that the system is effective and suitable for service introduction.
|-
| Production Readiness Review (PRR)
| The examination of a program to determine if the design is ready for production and if the prime contractor and major subcontractors have accomplished adequate production planning without incurring unacceptable risks that will breach thresholds of schedule, performance, cost, or other established criteria.
|-
| Physical Configuration Audit
| An examination of the actual {{Term|Configuration (glossary)|configuration}} of an item being produced around the time of the full-rate production decision.
|-
| {{Term|Preliminary Design Review (PDR) (glossary)|Preliminary Design Review (PDR)}}
| A technical assessment establishing the physically allocated baseline to ensure that the system under review has a reasonable expectation of being judged operationally effective and suitable.
|-
| System Functional Review (SFR)
| A multi-disciplined review to ensure that the system's functional baseline is established and has a reasonable expectation of satisfying the requirements of the initial capabilities document or draft capability development document within the currently allocated budget and schedule.
|-
| System Requirements Review (SRR)
| A multi-disciplined review to ensure that the system under review can proceed into initial systems development and that all system requirements and performance requirements derived from the initial capabilities document or draft capability development document are defined and testable, as well as being consistent with cost, schedule, risk, technology readiness, and other system constraints.
|-
| System Verification Review (SVR)
| A multi-disciplined product and process assessment to ensure the system under review can proceed into low-rate initial production and full-rate production within cost (program budget), schedule (program schedule), risk, and other system constraints.
|-
| Technology Readiness Assessment
| A systematic, metrics-based process that assesses the maturity of critical technology elements, such as sustainment drivers.
|-
| Test Readiness Review (TRR)
| A multi-disciplined review designed to ensure that the subsystem or system under review is ready to proceed into formal testing.
|}
  
 
==Linkages to Other Systems Engineering Management Topics==
The SE assessment and control process is closely coupled with the [[Measurement|measurement]], [[Planning|planning]], [[Decision Management|decision management]], and [[Risk Management|risk management]] processes. The [[Measurement|measurement]] process provides indicators for comparing actuals to plans. [[Planning|Planning]] provides estimates and milestones that constitute plans for monitoring as well as the project plan, which uses measurements to monitor progress. [[Decision Management|Decision management]] uses the results of project monitoring as decision criteria for making control decisions.
 
  
 
==Practical Considerations==
Key pitfalls and good practices related to SEAC are described in the next two sections.
 
===Pitfalls===
  
Some of the key pitfalls encountered in planning and performing SE assessment and control are shown in Table 2.
  
 
{|
|+'''Table 2. Major Pitfalls with Assessment and Control.''' (SEBoK Original)
! Name
! Description
|-
| No Measurement
| Since the assessment and control activities are highly dependent on insightful measurement information, it is usually ineffective to proceed independently from the measurement efforts - what you get is what you measure.
|-
| "Something in Time" Culture
| Some things are easier to measure than others - for instance, delivery to cost and schedule. Don't focus on these and neglect harder things to measure like quality of the system. Avoid a "something in time" culture where meeting the schedule takes priority over everything else, but what is delivered is not fit for purpose, resulting in the need to rework the project.
|-
| No Teeth
| Make sure that the technical review gates have "teeth". Sometimes the project manager is given authority (or can appeal to someone with authority) to over-ride a gate decision and allow work to proceed, even when the gate has exposed significant issues with the technical quality of the system or associated work products. This is a major risk if the organization is strongly schedule-driven; it can't afford the time to do it right, but somehow it finds the time to do it again (rework).
|-
| Too Early Baselining
| Don't baseline requirements or designs too early. Often there is strong pressure to baseline system requirements and designs before they are fully understood or agreed, in order to start subsystem or component development. This just guarantees high levels of rework.
|}
  
 
===Good Practices===
Some good practices gathered from the references are shown in Table 3.
  
 
{|
|+'''Table 3. Proven Practices with Assessment and Control.''' (SEBoK Original)
! Name
! Description
|-
| Independence
| Provide independent (from customer) assessment and recommendations on resources, schedule, technical status, and risk based on experience and trend analysis.
|-
| Peer Reviews
| Use peer reviews to ensure the quality of work products before they are submitted for gate review.
|-
| Accept {{Term|Uncertainty (glossary)|Uncertainty}}
| Communicate uncertainties in requirements or designs and accept that uncertainty is a normal part of developing a system.
|-
| Risk Mitigation Plans
| Do not penalize a project team at gate review if they admit uncertainty in requirements - ask for their risk mitigation plan to manage the uncertainty.
|-
| Just In-Time Baselining
| Baseline requirements and designs only when you need to - when other work is committed based on the stability of the requirement or design. If work must start and the requirement or design is still uncertain, consider how you can build robustness into the system to handle the uncertainty with minimum rework.
|-
| Communication
| Document and communicate status findings and recommendations to stakeholders.
|-
| Full Visibility
| Ensure that action items and action-item status, as well as other key status items, are visible to all project participants.
|-
| Leverage Previous Root Cause Analysis
| When performing root cause analysis, take into account the root cause and resolution data documented in previous related findings/discrepancies.
|-
| Concurrent Management
| Plan and perform assessment and control concurrently with the activities for [[Measurement|measurement]] and [[Risk Management|risk management]].
|-
| Lessons Learned and Post-Mortems
| Hold post-delivery assessments or post-project reviews to capture knowledge associated with the project - e.g., to augment and improve estimation models, lessons learned databases, and gate review checklists.
|}
  
 
Additional good practices can be found in INCOSE (2010, Clause 6.2 and Section 5.2.1.5), SEG-ITS (2009, Sections 3.9.3 and 3.9.10), and NASA (2007, Section 6.7).
 
  
 
==References==
  
===Works Cited===
Caltrans and USDOT. 2005. ''[[Systems Engineering Guidebook for Intelligent Transportation Systems (ITS)]],'' version 1.1. Sacramento, CA, USA: California Department of Transportation (Caltrans) Division of Research & Innovation/U.S. Department of Transportation (USDOT), SEG for ITS 1.1.
  
DAU. 2010. ''[[Defense Acquisition Guidebook (DAG)]]''. Ft. Belvoir, VA, USA: Defense Acquisition University (DAU)/U.S. Department of Defense (DoD).  February 19, 2010.
  
INCOSE. 2012. ''[[INCOSE Systems Engineering Handbook|Systems Engineering Handbook]]: A Guide for System Life Cycle Processes and Activities''. Version 3.2.2. San Diego, CA, USA: International Council on Systems Engineering (INCOSE), INCOSE-TP-2003-002-03.2.2.
  
NASA. 2007. ''[[NASA Systems Engineering Handbook|Systems Engineering Handbook]]''. Washington, DC, USA: National Aeronautics and Space Administration (NASA), December 2007. NASA/SP-2007-6105.  
  
SEI. 2007. "Measurement and Analysis Process Area," in ''[[Capability Maturity Model Integrated (CMMI) for Development]],'' version 1.2. Pittsburgh, PA, USA: Software Engineering Institute (SEI)/Carnegie Mellon University (CMU).
  
 
===Primary References===
Caltrans and USDOT. 2005. ''[[Systems Engineering Guidebook for Intelligent Transportation Systems (ITS)]],'' version 1.1. Sacramento, CA, USA: California Department of Transportation (Caltrans) Division of Research & Innovation/U.S. Department of Transportation (USDOT), SEG for ITS 1.1.
  
DAU. 2010. ''[[Defense Acquisition Guidebook (DAG)]]''. Ft. Belvoir, VA, USA: Defense Acquisition University (DAU)/U.S. Department of Defense (DoD). February 19, 2010.
 
 
INCOSE. 2012. ''[[INCOSE Systems Engineering Handbook|Systems Engineering Handbook]]: A Guide for System Life Cycle Processes and Activities''. Version 3.2.2. San Diego, CA, USA: International Council on Systems Engineering (INCOSE), INCOSE-TP-2003-002-03.2.2.
  
NASA. 2007. ''[[NASA Systems Engineering Handbook|Systems Engineering Handbook]]''. Washington, DC, USA: National Aeronautics and Space Administration (NASA), December 2007. NASA/SP-2007-6105.
  
SEI. 2007. "Measurement and Analysis Process Area," in ''[[Capability Maturity Model Integrated (CMMI) for Development]],'' version 1.2. Pittsburgh, PA, USA: Software Engineering Institute (SEI)/Carnegie Mellon University (CMU).
  
 
===Additional References===
  
ISO/IEC/IEEE. 2009. ''[[ISO/IEC/IEEE 16326|Systems and Software Engineering - Life Cycle Processes - Project Management]].'' Geneva, Switzerland: International Organization for Standardization (ISO)/International Electrotechnical Commission (IEC)/Institute of Electrical and Electronics Engineers (IEEE), ISO/IEC/IEEE 16326:2009(E).
 
 
----
<center>[[Technical Planning|< Previous Article]] |  [[Systems Engineering Management|Parent Article]]  |  [[Decision Management|Next Article >]]</center>
  
[[Category: Part 3]][[Category:Topic]]
 
[[Category:Systems Engineering Management]]
<center>'''SEBoK v. 2.10, released 06 May 2024'''</center>
