System Realization


3.4.1 Introduction

The SEBoK divides the traditional life cycle process steps into four stages. This chapter discusses the realization stage. The processes included in realization are those required to build a system, integrate disparate system elements, and ensure that the system both meets the needs of stakeholders and aligns with the requirements identified in the system definition stages. These processes are not sequential; their iteration and flow are depicted in Figure 1, which also shows how these processes fit within the context of the System Definition and System Deployment and Use knowledge areas.

File:Figure 1.png


Essentially, the outputs of system definition are used during implementation to create system elements and during integration to provide plans and criteria for combining these elements. The requirements derived as part of system definition are used to verify and validate elements, subsystems, and the overall system. These activities provide feedback into the system design, particularly when problems or challenges are identified. Finally, when the system is considered verified and validated, it will then become an input to system deployment and use. It is important to realize that there is overlap in these activities; they do not have to occur in sequence. The way these activities are performed is dependent upon the life cycle model in use (for additional information on life cycles, please see the Systems Engineering Life Cycles knowledge area (KA)).

The realization processes are designed to ensure that the system will be ready to transition and have the appropriate structure and behavior to enable desired operation and functionality throughout the system’s life span. Both DAU and NASA include “transition” in realization in addition to implementation, integration, verification, and validation. (Prosnik 2010; NASA December 2007, 1-360) However, the SEBoK includes transition in the System Deployment and Use KA.

Topics presented under realization include:

  • 3.4.2 Fundamentals
  • 3.4.3 Implementation
  • 3.4.4 Integration
  • 3.4.5 Verification
  • 3.4.6 Validation

3.4.2 Fundamentals

Figure 2 illustrates a macro view of generic outputs from realization activities when using a Vee life cycle model. The left side of the Vee represents the design activities performed while "going down" the system.

File:The V - A Macro View.png

The left side of the Vee model demonstrates the development of end-product specifications and design descriptions. In this stage, verification and validation plans are developed which are later used to determine whether realized products are compliant with specifications and stakeholder requirements. Also during this stage, initial specifications become flow-down requirements for lower-level system models. In terms of time frame, these activities occur "early" in the system's life cycle. Many of these activities are discussed in the System Definition knowledge area. However, it is important to realize that some of the system realization activities are initiated at the same time as some system definition activities.

The right side of the Vee model, as illustrated in Figure 2, results in system elements that are assembled into end products according to the system model described during the left side of the Vee. Verification and validation activities determine how well the system fulfills the stakeholder requirements and design specifications. These activities should follow the plans developed on the left side of the Vee.

The U.S. Defense Acquisition University (DAU) provides this overview of what occurs during system realization:

"Once the products of all system models have been fully defined, Bottom-Up End Product Realization can be initiated. This begins by applying the Implementation Process to buy, build, code or reuse end products. These implemented end products are verified against their design descriptions and specifications, validated against Stakeholder Requirements and then transitioned to the next higher system model for integration. End products from the Integration Process are successively integrated upward, verified and validated, transitioned to the next acquisition phase or transitioned ultimately as the End Product to the user." (Prosnik 2010)

While the systems engineering technical processes are life cycle processes, the processes are concurrent, and the emphasis of the respective processes depends on the phase and maturity of the design. Figure 3 demonstrates (from left to right) a notional emphasis of the respective processes throughout the systems acquisition life cycle from the perspective of the U.S. Department of Defense (DoD). It is important to note that, from this perspective, these processes do not follow a linear progression. Instead, they are concurrent, with the amount of activity in a given area changing over the system's life cycle. The red boxes indicate the topics that will be discussed below as part of realization.

File:Figure 3.png

3.4.3 Implementation

Implementation uses the structure created during architecture design and the results of system analysis to construct system elements that meet the stakeholder requirements developed in the early life cycle phases. These elements are then integrated to form a complete system. (See Integration, Section 3.4.4.)

3.4.3.1 Definition and Approach

Implementation is the process that actually yields the lowest-level system elements in the system hierarchy (system breakdown structure). The system elements are made, bought, or reused. Production involves the hardware fabrication processes of forming, removing, joining, and finishing; or the software realization processes of coding and testing; or the operational procedures development processes for operators' roles. If implementation involves a production process, a manufacturing system which uses the established technical and management processes may be required.

The purpose of the implementation process is to design and create or fabricate a system element conforming to that element's design requirements. The element is constructed employing appropriate technologies and industry practices. This process bridges the processes outlined in the System Definition knowledge area and the implementation process (see Section 3.4.3). This overlap is demonstrated in Figure 4. Though simplistic, this diagram demonstrates that the designs developed during system definition are required inputs to the attainment of system elements, regardless of how that attainment occurs, and that system elements are necessary in order to assemble complete subsystems and systems.

File:Figure 4.png

3.4.3.2 Process Approaches

During the implementation process, engineers apply the design requirements allocated to a system element to design and produce a detailed description. They then fabricate, code, or build each individual element using specified materials, processes, physical or logical arrangements, standards, technologies, and/or information flows outlined in the detailed description (drawings or other design documentation). The system element is then verified against the detailed description and validated against the design requirements.

If subsequent verification and validation actions or configuration audits reveal discrepancies, recursive interactions occur with predecessor activities or processes as required to mitigate those discrepancies and to modify, repair, or correct the system element in question.

Figure 5 provides the context for the implementation process from the perspective of DAU.

File:Figure 5.png

The International Council on Systems Engineering (INCOSE) provides a similar, but somewhat more detailed view on the context of implementation, as seen in Figure 6.

File:Figure 6.png

These figures provide a useful overview of the systems engineering community's perspectives on what is required for implementation and what the general results of implementation may be. These are further supported by the discussion of implementation inputs, outputs, and activities found in the National Aeronautics and Space Administration (NASA) Handbook. (NASA December 2007, 1-360) It is important to realize that these views are process-oriented. While this is a useful model, examining implementation only in terms of process can be limiting.

Depending on the technologies and systems chosen when a decision is made to produce a system element, the implementation process outcomes may generate constraints to be applied on the architecture of the higher-level system; those constraints are normally identified as derived technical requirements and added to the set of technical requirements applicable to this higher-level system. The architectural design has to take those constraints into account. (DAU 2010)

If the decision is made to purchase or reuse an existing system element, this has to be identified as a constraint or technical requirement applicable to the architecture of the higher-level system. Conversely, the implementation process may involve some adaptation or adjustment of the system element in order for it to be integrated into a higher-level system or assembly.

Implementation also involves packaging, handling, and storage, depending on the technologies concerned and on where or when the system element needs to be integrated into a higher-level assembly. Developing the supporting documentation for the system element, such as the manuals for operation, maintenance, and/or installation, is also a part of the implementation process. (DAU February 19, 2010)

The system element design requirements and the associated verification and validation criteria are inputs to this process; these inputs come from the architectural design process detailed outputs.

Execution of the implementation process is governed by standards, both industry and government, and the terms of all applicable agreements. This may include conditions for packaging and storage, as well as preparation-for-use activities such as operator training. In addition, packaging, handling, storage, and transportation (PHS&T) considerations will constrain the implementation activities. For more information, please refer to the discussion of PHS&T in the Deployment and Use knowledge area. In addition, the developing or integrating organization will likely have enterprise-level safety practices and guidelines that must also be considered. Outputs from implementation will be utilized in system integration activities.

In order to perform implementation, the following activities must be completed:

  1. Define the implementation strategy. Implementation process activities begin with detailed design and include developing an Implementation Strategy that defines fabrication and coding procedures, tools and equipment to be used, implementation tolerances, and the means and criteria for auditing the configuration of resulting elements against the detailed design documentation. In the case of repeated system element implementations (such as for mass manufacturing or replacement elements), the implementation strategy is defined and refined to achieve consistent and repeatable element production; it is retained in the project decision database for future use. The implementation strategy also contains the arrangements for packing, storage, and supply of the system element (see the sketch after this list).
  2. Realize the system element. Realize or adapt and produce the concerned system element using the implementation strategy items as defined above. Realization or adaptation is conducted with regard to standards that govern applicable safety, security, privacy, and environmental guidelines or legislation and the practices of the relevant implementation technology. This requires the fabrication of hardware elements, development of software elements, definition of training capabilities and drafting of training documentation, and the training of initial operators and maintainers.
  3. Provide evidence of compliance. Record evidence that the system element meets its design requirements and the associated verification and validation criteria, as well as applicable legislation and policy. This requires conducting peer reviews and unit testing, as well as inspection of operation and maintenance manuals.
  4. Package, store and supply the system element. This should be defined in the implementation strategy.
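Purely as an illustrative aid, the strategy items enumerated in activity 1 can be pictured as a simple record. The following is a minimal Python sketch; the class and field names are hypothetical and not drawn from any standard.

```python
from dataclasses import dataclass, field

@dataclass
class ImplementationStrategy:
    """Decisions an implementation strategy captures (illustrative fields only)."""
    fabrication_and_coding_procedures: list = field(default_factory=list)
    tools_and_equipment: list = field(default_factory=list)
    implementation_tolerances: dict = field(default_factory=dict)
    configuration_audit_criteria: list = field(default_factory=list)  # vs. detailed design
    packing_storage_supply: str = ""  # arrangements for the realized element

# Example: a strategy record retained in the project decision database
strategy = ImplementationStrategy(
    fabrication_and_coding_procedures=["machining procedure M-12"],
    implementation_tolerances={"foot_length_mm": 0.5},
)
```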

3.4.3.3 Applicable Methods & Tools

There are many software tools available in the implementation and integration phases. The most basic method would be the use of N-Square diagrams as discussed in Jeff Grady’s book on System Integration. (Grady 1994)

3.4.3.4 Evaluation

Proper implementation evaluation should include testing to determine whether the system element (e.g., a piece of software, a subsystem, or another product) works for its intended use. Testing could include mockups and breadboards, as well as modeling and simulation of a prototype or of completed pieces of a system. Once this is completed successfully, the next process is system integration.

3.4.4 System Integration

3.4.4.1 Introduction, Definition and Purpose

Introduction – System Integration consists of taking delivery of the implemented Components (system elements) that compose the System-of-Interest, assembling these Components together, and performing Verification Actions in the course of the assembly. The ultimate goal of system integration is to ensure that the individual system elements function properly as a whole and satisfy the specified requirements (design characteristics) of the system. System integration is one part of the realization effort and relates only to developmental items. Integration must not be confused with the assembly of end products on a production line; to perform production, the assembly line uses a different order of assembly from the one used for integration.

Definition and Purpose – System Integration consists of a process that "combines system elements to form complete or partial system configurations in order to create a product specified in the system requirements." (ISO/IEC 15288 2008, p. 44) The process extends to any kind of product system, service system, or enterprise system. The purpose of system integration is to make the System-of-Interest ready for final validation and transition, either for use or for production. Integration consists of progressively assembling aggregates of Components (system elements, sub-systems) that compose the System-of-Interest as architected during design, and of checking the correctness of the static and dynamic aspects of the interfaces between the Components. The Defense Acquisition University provides the following context for integration: "The integration process will be used [. . .] for the incorporation of the final system into its operational environment to ensure that the system is integrated properly into all defined external interfaces. The interface management process is particularly important for the success of the integration process, and iteration between the two processes will occur." (DAU February 19, 2010)

The purpose of system integration can be summarized as follows: (1) completely assemble the System-of-Interest to make sure that the Components are compatible with each other; (2) demonstrate that the aggregates of Components deliver the expected functions and performance/effectiveness; and (3) detect defects/faults related to design and assembly activities by subjecting the aggregates to focused Verification Actions.

Note: In the systems engineering literature, the term "integration" is sometimes used in a broader sense than in the present topic. In this broader sense, it concerns the technical effort to simultaneously design and develop the system and the processes for developing the system, through concurrent consideration of all life cycle stages, needs, and competencies. This approach requires the "integration" of numerous skills, activities, and processes.

3.4.4.2 Principles

3.4.4.2.1 Boundary of integration activity

In the present sense, integration is understood as the complete bottom-up branch of the Vee cycle, including the assembly tasks and the appropriate verification tasks. See Figure 7.

File:Figure 7.png

The assembly activity joins together and physically links the Components. Each Component is individually verified and validated prior to entering integration. Integration then adds the verification activity to the assembly activity excluding the final validation.

The final validation performs operational tests that authorize the transition for use or the transition for production. Remember that system integration yields only preproduction prototypes of the concerned product, service, or enterprise. If the product, service, or enterprise is delivered as a unique exemplar, the final validation activity serves as acceptance for delivery and transfer for use. If the prototype is to be produced in several exemplars, the final validation serves as acceptance to launch their production. The definition of the optimized assembly operations that will be carried out on a production line relates to the manufacturing process, not to the integration process.

Integration activity can sometimes reveal issues or anomalies that require modifications of the design of the system. Modifying the design is not part of the Integration Process; it concerns only the Design Process. Integration deals only with the assembly of the Components and the verification of the system against its characteristics as designed.

During the assembly, it is nevertheless possible to carry out finishing tasks that require several components simultaneously (e.g., painting the whole of two parts after assembly, calibrating a biochemical component, etc.). These tasks must be planned in the context of integration and are not carried out on separate components. In any case, they do not include modifications related to design.

3.4.4.2.2 Aggregation of Components

Integration is used to systematically assemble a higher-level Component (system) from implemented lower-level ones (sub-systems and/or system elements). Integration often begins with analysis and simulations (e.g., various types of prototypes) and progresses through increasingly realistic Components (systems, system elements) until the final product, service, or enterprise is achieved.

System integration is based on the notion of the Aggregate. An Aggregate is a subset of the system made up of several physical Components and Links (whether system elements or sub-systems) to which a set of Verification Actions is applied. Each Aggregate is characterized by a configuration that specifies the Components to be physically assembled and their configuration status.

To perform Verification Actions, a Verification Configuration that includes the Aggregate plus Verification Tools is constituted. The Verification Tools are enabling products and can be simulators (simulated components), stubs or caps, activators (launchers, drivers), harnesses, measuring devices, etc.
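These notions can be pictured as simple data structures. The following Python sketch is illustrative only; the class and field names are hypothetical and not drawn from any standard.

```python
from dataclasses import dataclass, field

@dataclass
class Component:
    """An implemented system element or sub-system delivered for integration."""
    name: str
    configuration_status: str  # e.g., a baseline or version identifier

@dataclass
class Aggregate:
    """A subset of the system: physical Components plus the Links between them."""
    components: list
    links: list  # pairs of component names that interface

@dataclass
class VerificationConfiguration:
    """An Aggregate plus the Verification Tools applied to it."""
    aggregate: Aggregate
    verification_tools: list = field(default_factory=list)  # simulators, stubs, drivers...
```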

3.4.4.2.3 Integration by Level of System

According to the Vee model (see Figure 2), System Definition (the top-down branch) proceeds by successive levels of decomposition; each level corresponds to the physical architecture of components (systems and system elements). Integration (the bottom-up branch) follows the opposite path, composing the system level by level.

On a given level, integration is done on the basis of the physical architecture defined during System Definition.

3.4.4.2.4 Integration Strategy

The integration of Components is generally performed according to a predefined strategy. The definition of the integration strategy is based on the architecture of the system and relies on the way the architecture of the system has been designed. The strategy is described in an Integration Plan that defines the configuration of the expected Aggregates and the order of assembly of these Aggregates, so as to carry out efficient Verification Actions (for example, inspections and/or testing). The integration strategy is thus elaborated starting from the selected Verification & Validation strategy (see section xxxxxx Verification & Validation Processes).

To define an integration strategy, one can use one or several possible integration approaches/techniques; any of these may be used individually or in combination. The selection of integration techniques depends on several factors, in particular the type of system, delivery time, order of delivery, risks, constraints, etc. Each integration technique has strengths and weaknesses which should be considered in the context of the System-of-Interest. Some integration techniques are summarized hereafter; see section 3.4.4.3.5 for others.

Global integration - Also known as "big-bang integration"; all the delivered components are assembled in only one step.

  • This technique is simple and does not require simulating components that are not yet available.
  • Faults are difficult to detect and localize; interface faults are detected late.
  • Should be reserved for simple systems, with few interactions and few components, and without technological risks.

Integration "with the stream" - The delivered components are assembled as they become available.

  • • Allow starting the integration quickly.
  • • Complex to implement because of the necessity to simulate the components not yet available. Impossible to control the end to end "functional chains"; so global tests are postponed very late in the schedule.
  • • Should be reserved for well known and controlled systems without technological risks.

Incremental integration - In a predefined order, one or a very few components are added to an already integrated increment of components.

  • Fast localization of faults: a new fault is usually located in the most recently integrated components or caused by a faulty interface.
  • Requires simulators for absent components, and many test cases: each component addition requires the verification of the new configuration and regression testing.
  • Applicable to any type of architecture.

Subsets integration - Components are assembled by subsets, and then the subsets are assembled together (a subset is an aggregate); this could be called "functional chains integration".

  • Saves time thanks to the parallel integration of subsets; delivery of partial products is possible. Requires fewer means and fewer test cases than incremental integration.
  • The subsets must be defined during design.
  • Applicable to architectures composed of sub-systems.

Top-down integration - Components or aggregates are integrated in their activation or utilization order.

  • A skeleton is available early and architectural faults are detected early; test cases can be defined close to reality; reuse of test data sets is possible.
  • Many stubs/caps need to be created; test cases for the leaf components (lowest level) are difficult to define.
  • Mainly used in the software domain. Integration starts from the highest-level component; lower-level components are added until the leaf components are reached.

Bottom-up integration - Components or aggregates are integrated in the opposite order of their activation or utilization.

  • Test cases are easy to define; faults are detected early (they are usually located in the leaf components); reduces the number of simulators to be used. An aggregate can be a sub-system.
  • Test cases must be redefined at each step; drivers are difficult to define and realize; lower-level components are "over-tested"; architectural faults are not detected quickly.
  • Used in the software domain and, more generally, in any kind of system.

Criterion-driven integration - The components that are most critical with respect to a selected criterion (dependability, complexity, technological innovation, etc.) are integrated first. Criteria are generally related to risks.

  • Allows critical components to be tested early and intensively; enables early verification of design choices.
  • Test cases and test data sets are difficult to define.

Usually, a mixed integration technique is selected as a trade-off among the techniques listed above, allowing the work to be optimized and the process to be adapted to the system under development. The optimization takes into account the realization time of the components, their scheduled delivery order, their level of complexity, the technical risks, the availability of assembly tools, cost, deadlines, specific personnel capability, etc.
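As a toy illustration of one of these techniques, the Python sketch below performs an incremental integration: components are added one at a time to the growing aggregate, and each new interface is checked by a stand-in Verification Action. All names are hypothetical assumptions for the example.

```python
def incremental_integration(assembly_order, interfaces, verify_interface):
    """Integrate components one at a time in a predefined order.

    assembly_order: component names in the order prescribed by the plan
    interfaces: set of frozensets {a, b}, the couplings to verify
    verify_interface: callable (a, b) -> bool standing in for a Verification Action
    """
    aggregate = []
    for component in assembly_order:
        aggregate.append(component)
        # Verify every interface between the new component and the aggregate;
        # a new fault is usually localized near the last component added.
        for other in aggregate[:-1]:
            if frozenset((component, other)) in interfaces:
                if not verify_interface(component, other):
                    raise RuntimeError(f"fault near {component} <-> {other}")
    return aggregate

# Example run with a trivially passing Verification Action
interfaces = {frozenset(p) for p in [("A", "B"), ("B", "C")]}
print(incremental_integration(["A", "B", "C"], interfaces, lambda a, b: True))
```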

3.4.4.3 Process Approach

3.4.4.3.1 Purpose and Principle of Approach

The purpose of the System Integration Process is to assemble the Components and the Links between them (systems and system elements) in order to obtain a system that is compliant with its physical and functional architecture design.

The activities of the Integration Process and those of the Verification Process fit into each other.

The process applies to any system at any level of the decomposition of the System-of-Interest. It is used iteratively, starting from a first aggregate of Components, until the complete system is reached; the last "loop" of the process results in the entirely integrated system.

The generic inputs are: the elements of the Functional Architecture (functions and functional interfaces: inputs, outputs, and control flows); the elements of the Physical Architecture (components and physical interfaces: links and ports); and the specified requirements applicable to the design of the concerned system.

The generic outputs are: the Integration Plan containing the integration strategy; the integrated system; the integration means (enabling products such as tools and procedures); the integration reports; and possibly issue/trouble reports and change requests about the design.

The outcomes of the Integration Process are used by the Verification Process and by the Validation Process.

3.4.4.3.2 Activities of the Process

Major activities and tasks performed during this process include the following (the overall loop is sketched after the list):

  1. Establish the Integration Plan (this activity is carried out concurrently with the design activity of the system), which defines:
    1. The optimized integration strategy: the order of assembly of the aggregates, using appropriate integration techniques.
    2. The Verification Actions to be processed for the purpose of integration.
    3. The configurations of the aggregates to be assembled and verified.
    4. The integration means and verification means (dedicated enabling products), which may include Assembly Procedures, Assembly Tools (harnesses, specific tools), Verification Tools (simulators, stubs/caps, launchers, test benches, measuring devices, etc.), and Verification Procedures.
  2. Obtain the integration means and verification means as defined in the Integration Plan; the means can be acquired in various ways, such as procurement, development, reuse, and sub-contracting; usually the acquisition of the complete set of means is a mix of these.
  3. Take delivery of each component:
    1. Unpack and reassemble the component with its accessories.
    2. Check the delivered configuration, the conformance of the component, the compatibility of its interfaces, and the presence of the mandatory documentation.
  4. Assemble the components into aggregates:
    1. Gather the components to be assembled, the integration means (Assembly Tools, Assembly Procedures), and the verification means (Verification Tools, Verification Procedures).
    2. Connect the components to each other to constitute aggregates, in the order prescribed by the Integration Plan and the Assembly Procedures, using the Assembly Tools.
    3. Add or connect the Verification Tools to the aggregates as predefined.
    4. Carry out any necessary operations of welding, gluing, drilling, tapping, adjusting, tuning, painting, parameter setting, etc.
  5. Verify each aggregate:
    1. Check that the aggregate is correctly assembled according to the established procedures.
    2. Perform the Verification Process, which uses the Verification Procedures, and check that the aggregate exhibits the right design characteristics/specified requirements.
    3. Record integration results or reports and potential issue reports, change requests, etc.
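A minimal Python sketch of this delivery-assembly-verification loop follows; the callables are placeholders standing in for the activities above, not a prescribed interface.

```python
def run_integration(plan, take_delivery, assemble, verify, record):
    """Execute an Integration Plan: deliver, assemble, verify, record.

    plan: list of (aggregate_spec, components) in the prescribed order;
    the four callables stand in for the activities described above.
    """
    integrated = []
    for aggregate_spec, components in plan:
        for component in components:
            take_delivery(component)   # unpack, check configuration and documentation
        aggregate = assemble(aggregate_spec, components)  # per Assembly Procedures/Tools
        result = verify(aggregate)                        # per Verification Procedures
        record(aggregate, result)      # integration reports, issues, change requests
        integrated.append(aggregate)
    return integrated
```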

3.4.4.3.3 Artifacts and Ontology Elements

This process may create several artifacts such as:

  1. Integrated System
  2. Assembly Tool
  3. Assembly Procedure
  4. Integration Plan
  5. Integration Report
  6. Issue / Anomaly / Trouble Report
  7. Change Request (about design)

File:Table 1.png

The main relationships between ontology elements are presented in Figure 8.

File:Figure 8.png

3.4.4.3.4 Checking and Correctness of Integration

The main items to be checked during the integration process are:

  • The Integration Plan respects its template.
  • The expected assembly order (integration strategy) is realistic.
  • No component or link set out in the System Design Document is forgotten.
  • Every interface and interaction between components is verified.
  • Assembly Procedures and Assembly Tools are available and validated prior to beginning the assembly.
  • Verification Procedures and Verification Tools are available and validated prior to beginning the verification.
  • Integration reports are recorded.

3.4.4.3.5 Methods and Techniques

Integration methods and techniques

Several different approaches that may be used for integration are summarized in section 3.4.4.2. Others exist, in particular for software-intensive systems, such as the vertical integration, horizontal integration, and star integration described below. Vertical integration is the method used to combine subsystems according to their functionality by creating functional entities, also referred to as silos. (Lau 2005, 52) Integration can be performed quickly and involves only the necessary stakeholders. This speed and limited external involvement generally make vertical integration lower in cost in the short term. However, the cost of ownership can be substantially higher than with other methods, because a new functional entity must be created for each new or enhanced function; reusing subsystems to create new functions or to improve existing functions is not possible. (Lau 2005, 52)

Horizontal integration or enterprise service bus (ESB)

A method in which a specialized subsystem is dedicated to communication between the other subsystems. (Lau 2005, 52) This reduces the number of interfaces, because each subsystem has only one interface: the one linking it to the ESB. The ESB is capable of translating data from one subsystem into a format that can be utilized by another subsystem. This generally cuts integration costs and provides flexibility. It is possible to replace a subsystem with another subsystem that provides similar functionality but uses different interfaces, completely transparently to all other subsystems; the only action required is to implement the new interface between the ESB and the new subsystem. (Lau 2005, 52) The cost of horizontal integration can be misinterpreted or incorrectly estimated, however, if the costs of intermediate data transformation or of shifting responsibility over business logic are not taken into account.

Star integration

An integration method in which each subsystem is interconnected with each of the remaining subsystems. When observed from the perspective of the subsystem being integrated, the connections are reminiscent of a star. The cost varies depending upon the interfaces exported by the subsystems; when the subsystems export heterogeneous interfaces, integration costs can rise substantially. The time and costs needed to integrate the systems grow rapidly when additional subsystems are added (the number of point-to-point interfaces grows quadratically with the number of subsystems), particularly if these subsystems utilize novel interfaces. From the feature perspective, this method often seems preferable, due to the extreme flexibility of the reuse of functionality. (Lau 2005, 52)
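The interface-count argument can be made concrete with a short calculation. The sketch below compares the number of point-to-point interfaces in a star (fully interconnected) arrangement against the one-interface-per-subsystem count of an ESB; it is a simple combinatorial illustration, not a cost model.

```python
def point_to_point_interfaces(n):
    """Star integration: every subsystem interconnected with every other."""
    return n * (n - 1) // 2

def esb_interfaces(n):
    """Horizontal integration: one interface per subsystem, to the ESB."""
    return n

for n in (4, 8, 16):
    print(n, point_to_point_interfaces(n), esb_interfaces(n))
# 4 subsystems: 6 vs 4 interfaces; 8: 28 vs 8; 16: 120 vs 16
```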

Coupling matrix and N-square diagram

One of the most basic methods of defining the aggregates and the order of integration is the use of N-Square diagrams. (Grady 1994, 190)

In the integration context, the coupling matrices are useful for optimizing the aggregate definition and verification of interfaces:

  • The integration strategy is defined and optimized by reorganizing the coupling matrix in order to group the components into aggregates, minimizing the number of interfaces to be verified between aggregates (see Figure 9).

File:Figure 9.png

  • When verifying the interactions between aggregates, the matrix is an aid for fault detection. If an error is detected when adding a component to an aggregate, the fault can be related either to the component, to the aggregate, or to the interfaces. If the fault is related to the aggregate, it can relate to any component or any interface between the components internal to the aggregate. A toy example of the grouping optimization is sketched below.
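Purely as an illustration of the first point, the following Python sketch counts the interfaces that a candidate grouping leaves between aggregates; a real integration strategy would use a proper partitioning or clustering method over the coupling matrix. All names and data are hypothetical.

```python
def cross_aggregate_interfaces(couplings, grouping):
    """Count the interfaces a grouping leaves between aggregates.

    couplings: iterable of (i, j) component pairs from the N-square diagram
    grouping: dict mapping each component to an aggregate identifier
    """
    return sum(1 for i, j in couplings if grouping[i] != grouping[j])

# Couplings among components A..D, read off an N-square diagram
couplings = [("A", "B"), ("B", "C"), ("C", "D"), ("A", "D")]
print(cross_aggregate_interfaces(couplings, {"A": 1, "B": 1, "C": 2, "D": 2}))  # 2
print(cross_aggregate_interfaces(couplings, {"A": 1, "B": 2, "C": 1, "D": 2}))  # 4 (worse)
```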

3.4.4.4 Application to Product systems, Service systems, Enterprise systems

As the nature of the implemented Components and Links differs among these types of systems, the Aggregates, the Assembly Tools, and the Verification Tools differ as well. Some integration techniques are more appropriate to certain types of systems. The following table provides some examples.

File:Table2.png


3.4.4.5 Practical Considerations

Pitfalls

  1. Experience shows that components do not always arrive in the expected order and that tests never proceed or conclude exactly as foreseen; the integration strategy should therefore allow great flexibility.
  2. The "big-bang" integration technique is not appropriate for fast detection of faults. It is thus preferable to verify the interfaces progressively throughout the integration.
  3. The preparation of the integration activities is often planned too late in the project schedule, typically when the first components are delivered.

Proven practices:

  1. The development of the Assembly Tools and Verification Tools can take as long as the development of the system itself. It should be started as early as possible, as soon as the preliminary design is nearly frozen.
  2. The integration means (Assembly Tools, Verification Tools) can be seen as enabling systems, developed using the System Definition and System Realization processes described in this SEBoK and managed as projects. These projects can be led by the project of the corresponding System-of-Interest but assigned to specific system blocks, or they can be subcontracted as separate projects.
  3. A good practice consists of gradually integrating aggregates in order to detect faults more easily. The coupling matrix applies to all strategies, and especially to the bottom-up integration strategy.
  4. The integration of complex systems is not easily foreseeable, and its progress is difficult to observe and control. It is therefore recommended to plan integration with specific margins, to use flexible techniques, and to integrate sets of similar technologies together.
  5. The integration leader should be part of the design team.

3.4.4.6 Primary References related to the topic

INCOSE. 2010. INCOSE systems engineering handbook, version 3.2. San Diego, CA, USA: International Council on Systems Engineering (INCOSE), INCOSE-TP-2003-002-03.2.

NASA. December 2007. Systems engineering handbook. Washington, D.C.: National Aeronautics and Space Administration (NASA), NASA/SP-2007-6105.

3.4.4.7 Additional References and Readings related to the Topic

DAU. February 19, 2010. Defense acquisition guidebook (DAG). Ft. Belvoir, VA, USA: Defense Acquisition University (DAU)/U.S. Department of Defense.

Grady, J.O. 1994. System integration. Boca Raton, FL, USA: CRC Press, Inc.

Hitchins, D. 2009. "What are the General Principles Applicable to Systems?" Insight. International Council on Systems Engineering.

Jackson, S. 2010. Architecting resilient systems: Accident avoidance and survival and recovery from disruptions. Hoboken, NJ, USA: John Wiley & Sons.

Reason, J. 1997. Managing the risks of organisational accidents. Aldershot, UK: Ashgate Publishing Limited.


3.4.5 System Verification

3.4.5.1 Introduction, Definition and Purpose

Introduction

Verification is a set of actions used to check the correctness of any element, such as a component, a system, a document, a service, a task, a requirement, etc. These actions are planned and carried out throughout the life cycle of the system. Verification is a generic term that needs to be instantiated within the context in which it occurs.

Verification, understood as a process, is a transverse activity that applies to every life cycle stage of the system. In particular, during the development cycle of the system, the Verification Process is performed in parallel with the System Definition and System Realization processes and applies to any activity and to any product resulting from an activity. The activities of every life cycle process and those of the Verification Process fit into each other. The Integration Process uses the Verification Process intensively. The life cycle processes are not described on a time basis, and the Verification Process does not occur only as a phase at the end of development; it is performed iteratively on every engineering element that is produced.

The term Verification is often associated with the term Validation and understood as the single concept of V&V. Validation is used to ensure that "one is working the right problem," whereas Verification is used to ensure that "one has solved the problem right." (Martin 1997) The present topic provides an overview of Verification concepts and activities; section 3.4.6 provides the corresponding overview for Validation.

Definition and Purpose

The purpose of Verification, as a generic action, is to identify the faults/defects introduced at the time of any transformation of inputs into outputs. Verification is used to prove that the transformation was made according to the selected and appropriate methods, techniques, standards, or rules. If the verification cannot be performed on the transformation itself, the outcomes of the transformation are used to establish evidence that these outcomes have the expected characteristics. Verification is based on tangible evidence; that is, on information whose veracity can be demonstrated by factual results obtained through techniques such as inspection, measurement, test, analysis, calculation, etc.

Thus, verifying a system (product, service, enterprise) consists of comparing the realized characteristics or properties of the product, service, or enterprise against its expected design characteristics (specified requirements). These design properties or characteristics are either independent of the System Requirements (state of the art) or specific, i.e., derived from the System Requirements.

Several books and standards provide different definitions of Verification. The most generally accepted definitions can be found in ISO/IEC 12207:2008, ISO/IEC 15288:2008, ISO 25000:2005, and ISO 9000:2005:

Verification: confirmation, through the provision of objective evidence, that specified requirements have been fulfilled. A note added in ISO/IEC 15288 states that verification is a set of activities that compares a system or system element against the required characteristics; this may include, but is not limited to, specified requirements, the design description, and the system itself.

3.4.5.2 Principles

3.4.5.2.1 Concept of Verification Action

Why verify? – In the context of human realizations, everyone knows that error is part of thought and of human activity, and this is the case in any engineering activity. Studies in human reliability have shown that people trained in a specific operation make on the order of 10⁻³ errors per hour in the best case. There is no "dishonor" in tracking down errors when designing systems, for example; on the contrary, this can be considered a sign of maturity. In any activity, or any outcome of an activity, the search for potential errors should not be neglected on the grounds that they will not happen or should not happen; the consequences of errors can be extremely significant failures or threats.

Verification Action – The term "verification" is generic and is used in association with other terms to define engineering elements (see section 3.4.5.3.3). Hereafter, the term Verification Action is used to refer to a single action of verification. A Verification Action is first defined and then performed.

File:Figure 10.png

The definition of a Verification Action applied to an engineering element includes (see Figure 10):

  • identifying the element on which the Verification Action will be performed;
  • identifying the reference/baseline used to define the expected result of the Verification Action.

The performance of the Verification Action includes:

  • obtaining a result by performing the Verification Action on the submitted element;
  • comparing the obtained result with the expected result;
  • deducing the degree of correctness of the element.
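This define-then-perform pattern can be sketched as a small data structure. The Python below is purely illustrative; the class, its fields, and the equality-based comparison are assumptions, not a normative model.

```python
from dataclasses import dataclass
from typing import Any, Callable

@dataclass
class VerificationAction:
    """One Verification Action: an element checked against a reference."""
    element: str                # the element the action is performed on
    reference: str              # baseline defining the expected result
    expected_result: Any
    perform: Callable[[], Any]  # inspection, analysis, test, ...

    def execute(self) -> bool:
        obtained = self.perform()                 # obtain a result from the element
        return obtained == self.expected_result   # compare; deduce correctness

# Chair example: measure a foot against the blueprint dimension
action = VerificationAction("chair foot", "blueprint B-7", 450, lambda: 450)
print(action.execute())  # True
```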

What to verify? – Any engineering element can be verified using a specific reference for comparison: a Stakeholder Requirement, a System Requirement, a Function, a Component, a Document, etc. For example:

  • To verify a document is to check the application of drafting rules.
  • To verify a Stakeholder Requirement or a System Requirement is to check the application of grammatical rules and of the characteristics defined in the Stakeholder Requirements Definition Process and the System Requirements Definition Process, such as being necessary, implementation-free, unambiguous, consistent, complete, singular, feasible, traceable, and verifiable – see section 3.3.4.3.4.
  • To verify the design of a system is to check its functional architecture and physical architecture elements against the characteristics of the outcomes of the Architectural Design Process – see section 3.3.5.3.4.
  • To verify an aggregate for integration is to check, in particular, every interface and interaction between Components – see section 3.4.4.3.4.
  • To verify a Verification Procedure is to check the application of a predefined template and drafting rules.
  • To verify a system (product, service, enterprise) is to check its realized characteristics or properties against its expected design characteristics (specified requirements).

3.4.5.2.2 Verification versus Validation

Source of the terms

Etymologically, the term verification comes from the Latin verus, meaning truth, and facere, meaning to make or perform; thus, verification means proving that something is "true" or correct (a property, a characteristic, etc.). The term validation comes from the Latin valere, meaning to become strong, and has the same root as the word value; thus, validation means proving that something has the right features to produce the expected effects. See "Verification and Validation in Plain English" (Lake, INCOSE 1999).

Conceptual differences

The difference between verification and validation is rather subtle. Consider the example of a chair. One Verification Action consists of measuring its feet to make sure that their length complies with the manufacturing instructions (the dimensions on the blueprint). Another Verification Action checks that the metal used conforms to the material imposed in the requirements. A Validation Action consists of checking that the chair is able to fulfill the mission and purpose of a chair: somebody must be able to sit on it and be at ease to eat or write. This example points to several significant aspects of verification and validation:

  • Verification Actions give rise to binary answers (the length of the feet is or is not compliant); Validation Actions can be the subject of discussion (comfort);
  • Verification relates mainly to one element (the feet), whereas validation relates to a set of components (the chair) and considers this set as a whole;
  • Validation presupposes that Verification Actions have been performed first.

Process similarities and differences

There are similarities between the Verification Process and the Validation Process in terms of activities – see section 3.4.5.3.2 and section xxxxx. The techniques used to define and perform Verification Actions and those used for Validation Actions are identical – see section 3.4.5.3.5.

The main differences concern the reference used to check the correctness of an element and the acceptability of the effective correctness. In verification, the comparison between the expected result and the obtained result is generally binary: it is true or it is not. In validation, the result of the comparison may require a judgment of value in deciding whether to accept the obtained result relative to a threshold or limit.
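This contrast can be caricatured in two comparison rules, as in the minimal sketch below: verification as an exact (or toleranced) match against the design reference, and validation as a judgment against an acceptability threshold. The functions and values are illustrative only.

```python
def verify(obtained, expected, tolerance=0.0):
    """Verification: binary comparison against the design reference."""
    return abs(obtained - expected) <= tolerance

def validate(obtained, acceptability_threshold):
    """Validation: judgment of value against an acceptability threshold."""
    return obtained >= acceptability_threshold

# Chair example: foot length verified against the blueprint dimension;
# a comfort rating validated against a (subjective) acceptance threshold.
print(verify(obtained=450.2, expected=450.0, tolerance=0.5))   # True
print(validate(obtained=7.2, acceptability_threshold=6.0))     # True
```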

3.4.5.2.3 Integration, Verification, Validation of the system

There is sometimes a misconception that verification occurs after integration and before validation. In most cases, it is more appropriate to begin verification activities during development (definition and realization) and to continue them into deployment and use. As shown in Figure 3 above, the U.S. DoD's conception of verification is that it occurs throughout the system life cycle, though the bulk of the activities generally occur during realization. (DAU 2010)

Once the system elements have been realized, they are integrated to form the complete system. Integration consists of assembling and performing Verification Actions, as stated in the Integration Process – see section 3.4.4. A final validation activity generally occurs once the system is integrated, but a certain number of Validation Actions are also performed in parallel with the system integration, in order to reduce as much as possible the number of Verification Actions and Validation Actions while controlling the risks that would be generated if some checks were dropped. Integration, Verification, and Validation are intimately processed together, due to the necessity of optimizing the Verification and Validation strategy together with the Integration strategy.

Applied to the global system, Integration, Verification, and Validation together ensure that the system has been built and operates correctly. Verification ensures that the system, as realized, meets its design characteristics (specified requirements) and is properly integrated with interfacing products. Validation ensures that the system satisfies its System Requirements and its Stakeholder Requirements.

3.4.5.3 Process Approach

3.4.5.3.1 Purpose and Principle of the approach

The purpose of the [System] Verification Process is to confirm that the specified design requirements are fulfilled by the system. This process provides the information required to effect the remedial actions that correct non-conformances in the realized system or the processes that act on it. (ISO-IEC 15288:2008) It is possible to generalize the process using an extended purpose as follows: the purpose of the Verification Process applied to any element is to confirm that the applicable design reference is fulfilled by this element.

Each system element, sub-system, and the complete system should be compared against its own design references (specified requirements) – see section 3.4.5.2.1. As stated by Dennis Buede, "verification is the matching of [configuration items], components, sub-systems, and the system to corresponding requirements to ensure that each has been built right." (Buede 2009) This means that the Verification Process is instantiated as many times as necessary during the global development of the system; it occurs at every level of the system decomposition and throughout system development, as necessary.

Because of the generic nature of a process, the Verification Process can be applied to any engineering element that has contributed to the definition and realization of the system elements, the sub-systems, and the system itself.

However, facing the huge number of potential Verification Actions that this approach may generate, it is necessary to optimize the verification strategy. This strategy is based on a balance between what must be verified, the constraints, such as time, cost, and feasibility of testing, that naturally limit the number of Verification Actions, and the risks one accepts by dropping some Verification Actions.

Several approaches exist that may be used for defining the Verification Process. INCOSE defines two main steps: plan and perform the Verification Actions. (INCOSE 2010) NASA has a slightly more detailed approach that includes five main steps: prepare verification, perform verification, analyze outcomes, produce report, and capture work products. (NASA December 2007, 1-360, p. 102)

Any approach may be used, provided that it is appropriate to the scope of the system, the constraints of the project, includes the activities listed above in some way, and is appropriately coordinated with other activities (including System Definition, System Realization, and extension to the rest of the life cycle).

The generic inputs are the baseline references of the submitted element. If the element is a system, the inputs are the functional and physical architecture elements as described in a System Design Document, the design description of the interfaces internal to the system (input/output Flows, Links), and the Interface Requirements external to the system.

The generic outputs are: the Verification Plan, which includes the verification strategy; the selected Verification Actions; the Verification Procedures; the Verification Tools; the verified element or system; the verification reports; and the issue/trouble reports and change requests on the design.

3.4.5.3.2 Activities of the Process

Major activities and tasks performed during this process include:

  1. Establish the verification strategy, drafted in a Verification Plan (this activity is carried out concurrently with the System Definition activities), through the following tasks:
    1. Identify the verification scope by listing, as exhaustively as possible, the characteristics or properties that should be checked; the number of Verification Actions can be very high;
    2. Identify the constraints, according to their origin (technical feasibility; management constraints such as cost, time, and the availability of verification means or qualified personnel; contractual constraints such as the criticality of the mission), that potentially limit the Verification Actions;
    3. Define the appropriate verification techniques to be applied, such as inspection, analysis, simulation, peer review, testing, etc., depending on the most suitable project step for performing every Verification Action given the constraints;
    4. Trade off what should be verified (the scope), taking into account all the constraints or limits, and deduce what can be verified (see the sketch after this list); the selection of Verification Actions is made according to the type of system, the objectives of the project, the acceptable risks, and the constraints;
    5. Optimize the verification strategy by defining the most appropriate verification technique for every Verification Action, defining the necessary verification means (tools, test benches, personnel, location, facilities) according to the selected verification technique, scheduling the execution of the Verification Actions within the project steps or milestones, and defining the configuration of the elements submitted to the Verification Actions (mainly for testing of physical elements).
  2. Perform the Verification Actions, which includes the following tasks:
    1. Detail each Verification Action, in particular the expected results, the verification technique to be applied, and the corresponding means (equipment, resources, and qualified personnel);
    2. Acquire the verification means used during the system definition steps (qualified personnel, modeling tools, mock-ups, simulators, facilities), and then those used during the integration step (qualified personnel, Verification Tools, measuring equipment, facilities, Verification Procedures, etc.);
    3. Carry out the Verification Procedures at the right time, in the expected environment, with the expected means, tools, and techniques;
    4. Capture and record the results obtained when performing the Verification Actions using the Verification Procedures and means.
  3. Analyze the obtained results and compare them to the expected results; record whether or not the status is compliant; and generate verification reports and, as necessary, issue/trouble reports and change requests on the design.
  4. Control the process, which includes the following tasks:
    1. Update the Verification Plan according to the progress of the project; in particular, the planned Verification Actions may be redefined because of unexpected events (addition, deletion, or modification of actions);
    2. Coordinate the verification activities with the project manager regarding the schedule and the acquisition of means, personnel, and resources; with the designers regarding issue/trouble/non-conformance reports; and with the configuration manager regarding the versions of physical elements, design baselines, etc.
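The scope-versus-constraints trade-off in task 1.4 can be caricatured in a few lines of Python. This is a minimal sketch; the predicate names are hypothetical, and the policy shown (keep anything that is feasible or whose omission is too risky) is only one possible rule.

```python
def select_verification_actions(candidates, is_constrained, omission_risk_acceptable):
    """Trade the verification scope off against constraints (illustrative only).

    candidates: all Verification Actions identified from the scope
    is_constrained: callable, action -> True if cost/time/feasibility rule it out
    omission_risk_acceptable: callable, action -> True if dropping it is acceptable
    """
    selected, dropped = [], []
    for action in candidates:
        if not is_constrained(action):
            selected.append(action)      # feasible within the constraints
        elif omission_risk_acceptable(action):
            dropped.append(action)       # accepted risk: action is not performed
        else:
            selected.append(action)      # must be kept despite the constraints
    return selected, dropped
```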

3.4.5.3.3 Artifacts and Ontology Elements

This process may create several artifacts such as:

  1. Verification Plan (contains in particular the verification strategy)
  2. Verification Matrix (contains, for each Verification Action, the submitted element, the applied technique, the step of execution, the system block concerned, the expected result, the obtained result, etc.; one row is sketched after this list)
  3. Verification Procedures (describe the Verification Actions to be performed, the Verification Tools needed, the Verification Configuration, resources, personnel, schedule, etc.)
  4. Verification Reports
  5. Verification Tools
  6. Verified element (System)
  7. Issue / Nonconformance / Trouble Reports
  8. Change Requests on design
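As a purely illustrative companion to item 2, one row of a Verification Matrix could be represented as follows; the field names mirror the description above but are otherwise hypothetical.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class VerificationMatrixRow:
    """One row of a Verification Matrix (field names are illustrative)."""
    verification_action: str
    submitted_element: str
    technique: str              # inspection, analysis, demonstration, test, ...
    execution_step: str         # project step or milestone
    system_block: str
    expected_result: str
    obtained_result: str = ""   # filled in when the action is performed
    compliant: Optional[bool] = None
```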


This process handles the ontology elements shown in Table 2.

File:Table3.png

The main relationships between ontology elements are presented in Figure 11.

File:Figure 11.png

Note: "Design Reference" is a generic term; instances depend of the type of submitted engineering elements, for example: specified requirements, description of design characteristics or properties, drafting rules, standards, regulations, etc.

"Any realized engineering element" means for example: a Stakeholder Requirement when it is written, a Function, a Component, a sub-system, a Link, a Document, etc.

3.4.5.3.4 Checking and Correctness of Verification

The main items to be checked during the verification process concern the items produced by the process itself (one could speak of verification of the verification!):

  • The Verification Plan, the Verification Actions, the Verification Procedures, and the verification reports respect their corresponding templates.
  • Every verification activity has been planned, performed, and recorded, and has generated outcomes as defined in the process description above.

3.4.5.3.5 Methods and Techniques

There are several verification techniques for checking that an element or a system conforms to its Design References, or specified requirements. These techniques are essentially the same as those used for validation, though the application of the techniques may differ slightly – see section 3.4.6 for additional information. In particular, the purposes differ: verification is used to detect faults/defects, whereas validation is used to prove the satisfaction of [system and/or stakeholder] requirements.


  • Inspection – Verification Action technique based on visual or dimensional examination of an element; the verification relies on the human senses or uses simple methods of measurement and handling. Inspection is generally non-destructive and typically includes the use of sight, hearing, smell, touch, and taste; simple physical manipulation; mechanical and electrical gauging; and measurement. No stimuli (tests) are necessary. The technique is used to check properties or characteristics best determined by observation (e.g., paint colour, weight, documentation, listing of code, etc.).
  • Analysis – Verification Action technique based on analytical evidence obtained, without any intervention on the submitted element, using mathematical or probabilistic calculation, logical reasoning (including the theory of predicates), modelling, and/or simulation under defined conditions to show theoretical compliance. It is mainly used where testing under realistic conditions cannot be achieved or is not cost-effective.
  • Analogy or similarity – Verification Action technique based on evidence from elements similar to the submitted element or on experience feedback. It is absolutely necessary to show by prediction that the context is invariant and that the outcomes are transposable (models, investigations, experience feedback, etc.). Similarity can only be used if the submitted element is similar in design, manufacture, and use; if equivalent or more stringent Verification Actions were used for the similar element; and if the intended operational environment is identical to, or less rigorous than, that of the similar element.
  • Demonstration – Verification Action technique used to demonstrate correct operation of the submitted element against operational and observable characteristics without using physical measurements (no or minimal instrumentation or test equipment). Demonstration is sometimes called "field testing". It generally uses a set of tests selected by the supplier to show that the element's response to stimuli is suitable, or to show that operators can perform their assigned tasks when using the element. Observations are made and compared with predetermined/expected responses. Demonstration may be appropriate when requirements or specifications are given in statistical terms (e.g., mean time to repair, average power consumption, etc.).
  • Test – Verification Action technique performed on the submitted element by which functional, measurable characteristics, operability, supportability, or performance capability is quantitatively verified under controlled conditions that are real or simulated. Testing often uses special test equipment or instrumentation to obtain accurate quantitative data for analysis.
  • Sampling – Verification Action technique based on verification of characteristics using samples. The number of samples, the tolerances, and other characteristics must be specified in agreement with experience feedback.

Note: Demonstration and testing can be functional or structural. Functional demonstration and testing are designed to ensure that correct outputs are produced for specific inputs. For structural demonstration and testing, there are performance, recovery, interface, and stress considerations. These considerations determine the system's ability to perform and survive under expected conditions.
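To make the functional case concrete, here is a minimal Python sketch: the submitted element (a hypothetical sensor-scaling function; all names and values are assumptions) must produce the expected outputs for specific inputs under controlled conditions.

 # Minimal functional test sketch. The element under test is a
 # hypothetical function converting a raw sensor count to volts.
 def scale_reading(raw: int) -> float:
     return raw * 5.0 / 1023.0

 test_cases = [(0, 0.0), (1023, 5.0)]  # (input, expected output) pairs
 for raw, expected in test_cases:
     obtained = scale_reading(raw)
     status = "compliant" if abs(obtained - expected) < 1e-6 else "non-compliant"
     print(f"input={raw}: expected={expected}, obtained={obtained:.3f} -> {status}")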

3.4.5.4 Application to Product systems, Service systems, Enterprise systems

Because the process is generic, it is applied as defined above. The main difference resides in the detailed implementation of the verification techniques described above.

File:Table 4.png

3.4.5.5 Practical Considerations

Pitfalls:
  1. Confusing verification with validation leads developers to take the wrong reference/baseline when defining verification or validation actions and/or to address the wrong level of granularity (detailed level for verification, global level for validation).
  2. Overlooking verification actions: it is impossible to check every characteristic or property of every system element and of the system in every combination of operational conditions and scenarios, so a strategy (a justified selection of verification actions against risks) has to be established.
  3. Skipping verification activities to save time.
  4. Using only testing as a verification technique. Testing can check products and services only once they are implemented. Consider other techniques earlier, during design; analysis and inspection are cost-effective and allow potential errors or failures to be discovered early.
  5. Stopping the performance of Verification Actions when the budget and/or time are consumed. Prefer criteria such as coverage rates to end verification.
Proven practices:
  1. Considering that a modification at the design step is much less expensive than a modification on the prototype, an effort should be made to plan the Verification Actions as early as possible.
  2. Define the criteria that end the Verification Actions; carrying out Verification Actions without limits creates a risk of cost and schedule drift, and modifying and verifying in a non-stop cycle until a perfect system is obtained is the best way to never deliver the system. It is therefore necessary to set, for each type of Verification Action, limits on cost, time, and the maximum number of modification loop-backs, as well as ending criteria (percentage of success, number of errors detected, coverage rate obtained, etc.); see the sketch after this list.
  3. Include the person responsible for verification in the design team, or include some designers in the verification team.
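As an illustration of such ending criteria, the following minimal Python sketch decides whether a verification campaign may stop, based on a coverage rate and a loop-back limit; the threshold values and names are assumptions for illustration, not recommended figures.

 # Hypothetical ending criteria for a Verification Action campaign.
 # The thresholds are illustrative assumptions, to be set per project.
 COVERAGE_TARGET = 0.95  # required rate of executed Verification Actions
 MAX_LOOPBACKS = 3       # maximum modification/re-verification cycles

 def may_stop(executed: int, planned: int, loopbacks: int) -> bool:
     """Return True when the verification campaign may end."""
     coverage = executed / planned if planned else 0.0
     return coverage >= COVERAGE_TARGET or loopbacks >= MAX_LOOPBACKS

 print(may_stop(executed=96, planned=100, loopbacks=1))  # True: coverage reached
 print(may_stop(executed=70, planned=100, loopbacks=3))  # True: loop-back limit
 print(may_stop(executed=70, planned=100, loopbacks=1))  # False: keep verifying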

3.4.5.6 Primary References related to the topic

INCOSE. 2010. INCOSE systems engineering handbook, version 3.2. San Diego, CA, USA: International Council on Systems Engineering (INCOSE), INCOSE-TP-2003-002-03.2.

ISO/IEC. 2008. Systems and software engineering - system life cycle processes. Geneva, Switzerland: International Organization for Standardization (ISO)/International Electrotechnical Commission (IEC), ISO/IEC 15288:2008 (E).

NASA. December 2007. Systems engineering handbook. Washington, D.C.: National Aeronautics and Space Administration (NASA), NASA/SP-2007-6105.

3.4.5.7 Additional References and Readings related to the topic

Lake, Jerome. 1999. "Verification and validation in plain English." Proceedings of the INCOSE International Symposium.

3.4.6 System Validation

3.4.6.1 Introduction, Definition and Purpose

Introduction

Validation is a set of actions used to check the compliance of any element with its purpose. The element can be a Component, a system, a document, a service, a task, a System Requirement, etc. These actions are planned and carried out throughout the life cycle of the system. Validation is a generic term that needs to be instantiated within the context in which it occurs.

Understood as a process, validation is an activity transverse to every life cycle stage of the system. In particular, during the development cycle of the system, the Validation Process is performed in parallel with the System Definition and System Realization processes and applies to any activity and to the product resulting from that activity. The Validation Process generally occurs at the end of a set of life cycle tasks or activities, and at least at the end of every milestone of a development project.

The Validation Process is not limited to a phase at the end of the development of the system. It may be performed iteratively on every engineering element produced during development, and may begin with the validation of the expressed Stakeholders' Requirements as engineering elements.

The Validation Process applied to the completely integrated system is often called Final Validation – see System Integration section 3.4.4.2.1, Figure 7.

Definition and Purpose

The purpose of Validation, as a generic action, is to establish the compliance of any activity's output with the inputs of that activity. Validation is used to prove that the transformation of the inputs produced the expected, "right" result.

Validation is based on tangible evidence; that is, on information whose veracity can be demonstrated through factual results obtained by techniques or methods such as inspection, measurement, test, analysis, calculation, etc.

Thus, validating a system (product, service, enterprise) consists in demonstrating that the product, service, or enterprise satisfies its System Requirements. System Validation relates first to the System Requirements, and possibly to the Stakeholders' Requirements, depending on the contractual practices of the industrial sector concerned. From a global point of view, validating a system consists in acquiring confidence in its ability to achieve its intended mission or use under specific operational conditions.

Several books and standards provide different definitions of validation. The most generally accepted definition can be found in [ISO/IEC 12207:2008, ISO/IEC 15288:2008, ISO 25000:2005, ISO 9000:2005]:

Validation: confirmation, through the provision of objective evidence, that the requirements for a specific intended use or application have been fulfilled. A note added in ISO 9000:2005 states that validation is the set of activities ensuring and gaining confidence that a system is able to accomplish its intended use, goals, and objectives (i.e., meet stakeholder requirements) in the intended operational environment.

3.4.6.2 Principles

3.4.6.2.1 Concept of Validation

Validation Action – The term "validation" is generic and is used in association with other terms to define engineering elements – see section 3.4.6.3.3. Hereafter, the term Validation Action is used to refer to an action of validation. A Validation Action is first defined and then performed. The definition of a Validation Action applied to an engineering element includes (see Figure 12):

  • identifying the element on which the Validation Action will be performed,
  • identifying the reference/baseline used to define the expected result of the Validation Action.

The performance of the Validation Action includes:

  • obtaining a result by performing the Validation Action on the submitted element,
  • comparing the obtained result with the expected result,
  • deducing the degree of conformance/compliance of the submitted element,
  • deciding on the acceptability of this conformance/compliance, because the result of the comparison may sometimes require a value judgment about its relevance in the context of use (generally by analyzing it against a threshold or limit).

Note: If there is uncertainty about the conformance/compliance, the cause may be ambiguity in the requirements; a typical example is a measure of effectiveness expressed without a "limit of acceptance" (a threshold above or below which the measure is declared unfulfilled).

File:Figure 12.png
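To make the compare-and-decide tasks above concrete, here is a minimal Python sketch; the measure, its expected value, and the limit of acceptance are hypothetical assumptions.

 # Hypothetical compliance decision for one Validation Action: compare an
 # obtained measure against the expected result using an explicit limit
 # of acceptance, as recommended in the note above.
 def decide_compliance(obtained: float, expected: float, limit: float) -> str:
     """Return 'compliant' or 'non-compliant' for a measured value."""
     deviation = abs(obtained - expected)
     return "compliant" if deviation <= limit else "non-compliant"

 # Example: vehicle range expected to be 500 km, acceptance limit +/- 25 km.
 print(decide_compliance(obtained=490.0, expected=500.0, limit=25.0))  # compliant
 print(decide_compliance(obtained=460.0, expected=500.0, limit=25.0))  # non-compliant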

What to validate? – Any engineering element can be validated using a specific reference for comparison: Stakeholder Requirement, System Requirement, Function, Component, Document, etc. Examples:

  • Validating a Stakeholder Requirement means making sure its content is justified and relevant to stakeholder expectations, and is expressed in the language of the customer or end user.
  • Validating a System Requirement means making sure its content correctly and/or accurately translates a Stakeholder Requirement into the language of the supplier.
  • Validating the design of a system (its functional and physical architectures) means demonstrating that it satisfies the System Requirements.
  • Validating a system (product, service, enterprise) means demonstrating that the product, service, or enterprise satisfies its System Requirements and/or its Stakeholders' Requirements.
  • Validating an activity or a task means making sure its outputs are compliant with its inputs.
  • Validating a Document means making sure its content is compliant with the inputs of the task that produced the document.
  • Validating a Process means making sure its outcomes are compliant with its purpose.

3.4.6.2.2 Validation versus Verification

Section 3.4.5.2.2 discusses the fundamental differences between the two concepts and their associated processes. The following table provides supplementary, synthesized information to help in understanding the differences, depending on the point of view.

File:Table vvcomparison.png

According to the NASA Systems Engineering Handbook, from a process perspective the product verification and product validation processes may be similar in nature, but their objectives are fundamentally different. (NASA December 2007, 1-360)

3.4.6.2.3 System Validation, Final Validation, Operational Validation

System Validation concerns the global system (product, service, enterprise) seen as a whole and is based on the totality of the requirements (System Requirements, Stakeholders' Requirements), but it is obtained gradually throughout the development stage of the system by pursuing three non-exclusive ways:

  • first, by accumulating the results of the Verification Actions and Validation Actions provided by applying the Verification Process and the Validation Process to every definition element and to every integration element;
  • second, by performing final Validation Actions on the completely integrated system in an industrial environment (as close as possible to the operational environment);
  • third, by performing operational Validation Actions in the operational environment.

Operational Validation Actions relate to the operational mission of the system and to the acceptance of the system as ready for use or for production. For example, operational Validation Actions will require showing, in the operational environment, that a vehicle has the expected autonomy (is able to cover a defined distance), can cross obstacles, performs safety scenarios as required, etc. See the right part of Figure 14.

3.4.6.2.4 Integration, Verification and Validation level per level

It is impossible to carry out a single, global Validation Action on a completely integrated complex system: the sources of faults/defects could be numerous, and it would be impossible to determine the causes of a nonconformance raised during such a global check. Since the System-of-Interest has generally been decomposed during design into a set of blocks and layers of systems and system elements, every sub-system (system, system element) is verified, validated, and possibly corrected before being integrated into the parent system block of the higher level, as shown in Figure 13.

File:Figure 13.png

As necessary, the sub-systems (systems, system elements) are partially integrated into sub-sets in order to limit the number of properties/characteristics to be verified within a single step - see System Integration section 3.4.4.2. For each level, it is necessary to make sure, through a set of final Validation Actions, that the features established at the preceding level have not been damaged. Moreover, a compliant result obtained in a given environment (for example, the final validation environment) can become non-compliant if the environment changes (for example, to the operational validation environment). So, as long as the sub-system is not completely integrated and/or does not operate in the real operational environment, no result should be regarded as definitive.

When modifications are made to a sub-system, the temptation is to focus on the newly adapted configuration, forgetting the environment and the other configurations. However, a modification can have significant consequences for other configurations. Thus, any modification requires regression Verification Actions and Validation Actions (often called regression testing), as illustrated in the sketch below.
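The following minimal Python sketch illustrates the selection of regression actions after a modification; the element names, action identifiers, and dependency map are hypothetical assumptions.

 # Hypothetical regression selection: given the elements a modification
 # touches, re-run every recorded action that exercises those elements.
 actions_by_element = {
     "power_supply": ["VA-042", "VA-043"],
     "wiring_harness": ["VA-050"],
     "controller": ["VA-060", "VA-061"],
 }

 def regression_actions(modified_elements):
     """Return the set of Verification/Validation Actions to repeat."""
     to_rerun = set()
     for element in modified_elements:
         to_rerun.update(actions_by_element.get(element, []))
     return to_rerun

 # A change touching the power supply and its harness triggers three actions.
 print(sorted(regression_actions(["power_supply", "wiring_harness"])))
 # ['VA-042', 'VA-043', 'VA-050']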

3.4.6.2.5 Verification Actions and Validation Actions inside and transverse to levels

Inside each level of the system decomposition, Verification Actions and Validation Actions are performed during System Definition and System Realization, as represented in Figure 14 for the upper levels and in Figure 15 for the lower levels. The Stakeholders' Requirements Definition and the Operational Validation make the link between two levels of the system decomposition.

File:Figure 14.png

The Specified Requirements Definition of the system elements and the End Products Operational Validation make the link between the two lower levels of the decomposition – see Figure 15.

File:Figure 15.png

Note 1: The two figures above show, on their right parts, an ideal allocation of verification and validation activities, using the corresponding references provided by the System Definition processes on the left parts. Sometimes, in real practice, the outputs of the Stakeholders' Requirements Definition Process are not sufficiently formalized or do not contain sufficient operational scenarios, and so cannot serve as a reference for defining operational Validation Actions (to be performed in the operational environment). In this case, the outputs of the System Requirements Definition Process may be used instead.

Note 2: The last level of the system decomposition is dedicated to the realization of the system elements, and the vocabulary and the number of activities shown in Figure 15 may differ – see the Implementation section 3.4.3.

3.4.6.2.6 Verification and Validation strategy

The notion of a verification and validation strategy was introduced in section 3.4.5.2.3. The difference between verification and validation is especially useful when elaborating the integration strategy, the verification strategy, and the validation strategy. In fact, the efficiency of System Realization is gained by optimizing the three strategies together to form what is often called the Verification & Validation (V&V) strategy. The optimization consists in defining and performing the minimum set of Verification Actions and Validation Actions while detecting the maximum number of errors/faults/defects and gaining the maximum confidence in the use of the product, service, or enterprise. Of course, the optimization takes into account the risks potentially generated if Verification Actions or Validation Actions are dropped.
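One simple way to picture this optimization is as a cost-constrained selection problem: keep the actions with the best detection value per unit of cost until the budget is spent. The Python sketch below is a deliberately naive greedy illustration with assumed names, values, and costs; a real strategy must also weigh the risk incurred by each dropped action.

 # Naive greedy V&V strategy sketch: maximize total detection value under
 # a cost budget. The action list, values, and costs are assumptions.
 actions = [
     # (name, expected detection/confidence value, cost)
     ("inspect drawings", 5.0, 1.0),
     ("static analysis", 8.0, 2.0),
     ("bench test", 9.0, 4.0),
     ("environmental test", 6.0, 5.0),
 ]

 def plan_actions(budget):
     """Pick actions by value-per-cost ratio until the budget is exhausted."""
     selected, remaining = [], budget
     for name, value, cost in sorted(actions, key=lambda a: a[1] / a[2], reverse=True):
         if cost <= remaining:
             selected.append(name)
             remaining -= cost
     return selected

 print(plan_actions(budget=7.0))
 # ['inspect drawings', 'static analysis', 'bench test']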

3.4.6.2.7 Other similar approaches and adaptation

The sections above present a formalized approach that is well established in several industrial sectors and whose application has demonstrated its efficiency. For historical reasons, some industrial sectors may have (or seem to have) different practices, but analysis shows that the principles presented in this book are applied to a greater or lesser extent, and that these sectors distinguish actions more or less as identified here. The vocabulary may differ slightly: the terms verification and validation may be inverted, or not as well differentiated as explained above, as may the names of the references/baselines used for comparison. An example of an application that uses different vocabulary is illustrated in Figure 16 and in the text below.

File:Figure 16.png

Validation is a continuous process. It starts at the beginning of product development with the stakeholder requirements (upper part of the V-model), runs at each level of the system/product hierarchy to ensure that requirements are properly cascaded to all system components (left part of the V-model), and ends with the verification of the final product against the stakeholders' requirements (upper part of the right branch of the V). Validation and verification are therefore concurrent activities, embedded in the development cycle and difficult to isolate. For this reason, they are very often grouped together and generally expressed as "V&V". Figure 16 focuses on two major aspects of validation: validation of requirements and validation of the system.

In the present case, the purpose of requirement validation is to ensure that the requirements are correct and complete, so that the product will meet the upper-level requirements and user needs. Requirement validation may be done at all levels of the system hierarchy, using a top-down approach. Validation of requirements and assumptions at higher levels serves as a basis for validation at lower levels. Requirement validation should be performed as soon as a consistent set of requirements has been developed, rather than waiting until the complete set of requirements has been established.

In the present case, system validation follows a bottom-up approach and is possible only when all system elements and lower levels have been verified. System (final) validation is therefore the ultimate step of verification, demonstrating that the system accomplishes its final purpose as stated in the stakeholders' requirements.

3.4.6.3 Process Approach

3.4.6.3.1 Purpose and Principle of the Approach

The purpose of the [System] Validation Process is to provide objective evidence that the services provided by a system when in use comply with stakeholder requirements, achieving its intended use in its intended operational environment. (ISO/IEC 2008)

This process performs a comparative assessment and confirms that the stakeholders' requirements are correctly defined. Where variances are identified, these are recorded and guide corrective actions. System validation is ratified by stakeholders. (ISO/IEC 2008)

The validation process demonstrates that the realized end product satisfies its stakeholders' (customers and other interested parties) expectations within the intended operational environments, with validation performed by anticipated operators and/or users. (NASA December 2007, 1-360)

It is possible to generalize the process with an extended purpose as follows: the purpose of the Validation Process applied to any element is to demonstrate or prove that this element complies with its applicable requirements, achieving its intended use in its intended operational environment.

Each system element, each sub-system, and the complete system are compared against their own applicable requirements (System Requirements, Stakeholders' Requirements) – see section 3.4.6.2.1. This means that the Validation Process is instantiated as many times as necessary during the global development of the system: it occurs at every level of the system decomposition and, as necessary, throughout system development. Because of the generic nature of a process, the Validation Process can be applied to any engineering element that has contributed to the definition and realization of the system elements, the sub-systems, and the system itself.

In order to ensure that validation is feasible, the implementation of the requirements must be verifiable on the submitted element. Ensuring that requirements are properly written, i.e. quantifiable, measurable, unambiguous, etc., is essential. In addition, verification/validation requirements are often written in conjunction with the Stakeholders' and System Requirements and provide the method for demonstrating the implementation of each System Requirement or Stakeholder Requirement.

The generic inputs are the baseline references of requirements applicable to the submitted element. If the element is a system, the inputs are the System Requirements and Stakeholders' Requirements.

The generic outputs are the Validation Plan (which includes the validation strategy), the selected Validation Actions, the Validation Procedures, the Validation Tools, the validated element or system, the validation reports, and the issue/trouble reports and change requests on the requirements or on the product, service, or enterprise.

3.4.6.3.2 Activities of the Process

Major activities and tasks performed during this process include:

  1. Establish a validation strategy, drafted in a Validation Plan (this activity is carried out concurrently with the System Definition activities), through the following tasks:
    1. Identify the validation scope, which is represented by the [system and/or stakeholders] requirements; normally, every requirement should be checked, so the number of Validation Actions can be high;
    2. Identify the constraints, according to their origin (technical feasibility; management constraints such as cost, time, and the availability of validation means or qualified personnel; contractual constraints such as the criticality of the mission), that potentially limit or increase the Validation Actions;
    3. Define the appropriate verification/validation techniques to be applied, such as inspection, analysis, simulation, review, testing, etc., depending on the most suitable step of the project for performing every Validation Action, given the constraints;
    4. Trade off what should be validated (the scope), taking into account all the constraints or limits, and deduce what can be validated objectively; the selection of Validation Actions should be made according to the type of system, the objectives of the project, acceptable risks, and constraints;
    5. Optimize the validation strategy by defining the most appropriate verification/validation technique for every Validation Action, defining the necessary validation means (tools, test benches, personnel, location, facilities) according to the selected technique, scheduling the execution of the Validation Actions within the project steps or milestones, and defining the configuration of the elements submitted to Validation Actions (mainly for testing on physical elements).
  2. Perform the Validation Actions, which includes the following tasks:
    1. Detail each Validation Action, in particular the expected results, the verification/validation technique to be applied, and the corresponding means (equipment, resources, and qualified personnel);
    2. Acquire the validation means used during the system definition steps (qualified personnel, modeling tools, mock-ups, simulators, facilities), then those used during the integration, final, and operational steps (qualified personnel, Validation Tools, measuring equipment, facilities, Validation Procedures, etc.);
    3. Carry out the Validation Procedures at the right time, in the expected environment, with the expected means, tools, and techniques;
    4. Capture and record the results obtained when performing the Validation Actions using the Validation Procedures and means.
  3. Analyze the obtained results and compare them to the expected results; decide on the acceptability of the conformance/compliance – see section 3.4.6.2.1; record the decision and the status as compliant or not; generate validation reports and, as necessary, issue/trouble reports and change requests on the [System or Stakeholders] Requirements.
  4. Control the process, which includes the following tasks:
    1. Update the Validation Plan according to the progress of the project; in particular, planned Validation Actions can be redefined because of unexpected events (addition, deletion, or modification of actions);
    2. Coordinate the validation activities with the project manager (schedule, acquisition of means, personnel, and resources), with the designers (issue/trouble/nonconformance reports), and with the configuration manager (versions of physical elements, design baselines, etc.).

3.4.6.3.3 Artifacts and Ontology Elements

This process may create several artifacts such as:

  1. Validation Plan (contains in particular the validation strategy with objectives, constraints, the list of the selected Validation Actions, etc.)
  2. Validation Matrix (contains for each Validation Action, the submitted element, the applied technique / method, the step of execution, the system block concerned, the expected result, the obtained result, etc.)
  3. Validation Procedures (describe the Validation Actions to be performed, the Validation Tools needed, the Validation Configuration, resources, personnel, schedule, etc.)
  4. Validation Reports
  5. Validation Tools
  6. Validated element (system, system element, sub-system, etc.)
  7. Issue / Non Conformance / Trouble Reports
  8. Change Requests on the requirements, product, service, or enterprise

This process handles the ontology elements of Table 5.

File:Table 5.png

The main relationships between ontology elements are presented in Figure 17.

File:Figure 17.png

3.4.6.3.4 Checking and Correctness of Validation

The main items to be checked during the validation process concern the items produced by the validation process itself (one could speak of verification of the validation):

  • The Validation Plan, the Validation Actions, the Validation Procedures, and the validation reports respect their corresponding templates.
  • Every validation activity has been planned, performed, and recorded, and has generated outcomes as defined in the process description above.

3.4.6.3.5 Methods and Techniques

There are several verification/validation techniques/methods to check that an element or a system complies with its [System, Stakeholders'] Requirements. These techniques are the same as those used for verification, but the purposes are different: verification is used to detect faults/defects, whereas validation is used to prove satisfaction of the [System and/or Stakeholders'] Requirements. Refer to section 3.4.5.3.5.

Validation/Traceability Matrix – The traceability matrix is introduced in section xxxx of the Stakeholders Requirements Definition topic. It may also be extended and used to record data such as the list of Validation Actions, the verification/validation technique selected to verify/validate the implementation of each engineering element (in particular each Stakeholder and System Requirement), the expected results, and the results obtained when the Validation Actions have been performed. Using such a matrix enables the development team to ensure that the selected Stakeholders' and System Requirements have been verified, and to evaluate the percentage of Validation Actions completed. In addition, the matrix helps to check the validation activities performed against those planned in the Validation Plan, and finally to ensure that System Validation has been appropriately conducted.
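A minimal Python sketch of such a matrix-based progress check follows; the record fields, identifiers, and statuses are hypothetical assumptions consistent with the description above.

 # Hypothetical traceability records: each requirement is linked to a
 # planned Validation Action and its status once performed.
 matrix = [
     {"requirement": "SR-001", "action": "VAL-01", "status": "passed"},
     {"requirement": "SR-002", "action": "VAL-02", "status": "failed"},
     {"requirement": "SR-003", "action": "VAL-03", "status": None},  # not yet run
 ]

 completed = [row for row in matrix if row["status"] is not None]
 coverage = 100.0 * len(completed) / len(matrix)
 pending = [row["requirement"] for row in matrix if row["status"] is None]

 print(f"Validation Actions completed: {coverage:.0f}%")  # 67%
 print(f"Requirements awaiting validation: {pending}")    # ['SR-003']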

3.4.6.4 Application to Product systems, Service systems, Enterprise systems

See section 3.4.5.4

3.4.6.5 Practical Considerations

Pitfalls encountered with system validation:

  1. A common mistake is to wait until the system has been entirely integrated and tested (design is qualified) to perform any sort of validation. Validation should occur as early as possible in the [product] life cycle. (Martin 1997)
  2. Using only testing as a validation technique. Testing can check products and services only once they are implemented. Consider other techniques earlier, during design; analysis and inspection are cost-effective and allow potential errors, faults, or failures to be discovered early.
  3. Stopping the performance of Validation Actions when the budget and/or time are consumed. Prefer criteria such as coverage rates to end the validation activity.

Proven practices:

  1. The earlier in the project the characteristics of an element are verified and validated, the easier corrections are to make and the fewer consequences errors will have on schedule and costs.
  2. It is recommended to start drafting the Verification and Validation Plan as soon as the first requirements applicable to the system are known. If the writer of the requirements immediately asks how to verify/validate that the future system will satisfy the requirements, it is possible to:
    1. detect the unverifiable requirements,
    2. anticipate, estimate the cost of, and start the design of the verification/validation means (as needed), such as test benches, simulators, etc.,
    3. avoid cost overruns and schedule slippages.
  3. According to Buede, a requirement is verifiable if a "finite, cost-effective process has been defined to check that the requirement has been attained." (Buede 2009) Generally, this means that each requirement should be quantitative, measurable, unambiguous, understandable, and testable. It is generally much easier and more cost-effective to ensure that requirements meet these criteria while they are being written. Requirements adjustments made after implementation and/or integration are generally much more costly and may have wide-reaching redesign implications. There are several resources which provide guidance on creating appropriate requirements - see the System Definition knowledge area, Stakeholder Requirements and System Requirements topics for additional information. A rough wording-check sketch follows this list.
  4. It is important to document both the Validation Actions performed and the results obtained. This provides accountability regarding the extent to which the system, system elements, and subsystems fulfill the System Requirements and Stakeholders' Requirements. These data can be used to investigate why the system, system elements, or subsystems do not match the requirements and to detect potential faults/defects. When the requirements are met, these data may be reported to the relevant parties in the organization. For example, in a safety-critical system, it may be necessary to report the results of safety demonstrations to a certification organization. Validation results may be reported to the acquirer for contractual purposes, or internally within the company for business purposes.
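As a rough illustration of detecting potentially unverifiable requirements (practices 2.1 and 3 above), the following Python sketch flags vague wording; the vague-term list is an illustrative assumption, not an exhaustive or standard set.

 # Heuristic flagging of requirement wording that is hard to verify.
 # The vague-term list is an illustrative assumption only.
 VAGUE_TERMS = ["user-friendly", "fast", "adequate", "approximately",
                "as appropriate", "robust", "easy"]

 def flag_unverifiable(requirement):
     """Return the vague terms found in a requirement statement."""
     text = requirement.lower()
     return [term for term in VAGUE_TERMS if term in text]

 print(flag_unverifiable("The interface shall be user-friendly and fast."))
 # ['user-friendly', 'fast']
 print(flag_unverifiable("The pump shall deliver 30 l/min at 2 bar."))
 # []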

3.4.6.6 Primary References related to the topic

INCOSE. 2010. INCOSE systems engineering handbook, version 3.2. San Diego, CA, USA: International Council on Systems Engineering (INCOSE), INCOSE-TP-2003-002-03.2.

ISO/IEC. 2008. Systems and software engineering - system life cycle processes. Geneva, Switzerland: International Organization for Standardization (ISO)/International Electrotechnical Commission (IEC), ISO/IEC 15288:2008 (E).

NASA. December 2007. Systems engineering handbook. Washington, D.C.: National Aeronautics and Space Administration (NASA), NASA/SP-2007-6105.

3.4.6.7 Additional References and Readings related to the topic

3.4.7 Practical Considerations

The following are elements that should be considered when practicing any of the activities discussed as part of system realization:

  • Validation will often involve going back directly to the users to have them perform some sort of acceptance test under their own local conditions.
  • Mixing up verification and validation is a common issue. Validation demonstrates that the product, service, or enterprise, as provided, fulfills its intended use, whereas verification addresses whether a local work product properly reflects its specified requirements. In other words, verification ensures that "one built the system right," whereas validation ensures that "one built the right system." Validation Actions use the same techniques as Verification Actions (e.g., test, analysis, inspection, demonstration, or simulation).

Often the end users and other relevant stakeholders are involved in the validation activities. Validation and verification activities often run concurrently and may use portions of the same environment. (SEI 2007)

  • Include identification of the document(s)/drawing(s) to make the comparison between what is required and what is being inspected easier.
  • Identify the generic name of the analysis (e.g., Failure Modes and Effects Analysis), the analytical/computer tools and/or numeric methods, the source of input data, and how the raw data will be analyzed. Ensure agreement with the acquirer that the analysis methods and tools, including simulations, are acceptable for providing objective proof of requirements compliance.
  • State who the witnesses will be for the purpose of collecting the evidence of success, what general steps will be followed, and what special resources are needed, such as instrumentation, special test equipment or facilities, simulators, specific data gathering, or rigorous analysis of demonstration results.
  • Identify the test facility, the test equipment, any unique resource needs and environmental conditions, the required qualifications of test personnel, the general steps that will be followed, the specific data to be collected, the criteria for repeatability of collected data, and the methods for analyzing the results.

3.4.8 Primary References

Buede, D. M. 2009. The engineering design of systems: Models and methods. 2nd ed. Hoboken, NJ: John Wiley & Sons Inc.

DAU. February 19, 2010. Defense acquisition guidebook (DAG). Ft. Belvoir, VA, USA: Defense Acquisition University (DAU)/U.S. Department of Defense.

ECSS. 6 March 2009. Systems engineering general requirements. Noordwijk, Netherlands: Requirements and Standards Division, European Cooperation for Space Standardization (ECSS), ECSS-E-ST-10C.

INCOSE. 2010. INCOSE systems engineering handbook, version 3.2. San Diego, CA, USA: International Council on Systems Engineering (INCOSE), INCOSE-TP-2003-002-03.2.

ISO/IEC. 2008. Systems and software engineering - system life cycle processes. Geneva, Switzerland: International Organization for Standardization (ISO)/International Electrotechnical Commission (IEC), ISO/IEC 15288:2008 (E).

NASA. December 2007. Systems engineering handbook. Washington, D.C.: National Aeronautics and Space Administration (NASA), NASA/SP-2007-6105.

SAE International. 1996. Certification considerations for highly-integrated or complex aircraft systems. Warrendale, PA, USA: SAE International, ARP4754.

3.4.9 Additional References and Readings

DAU. 2009. Your acquisition policy and discretionary best practices guide. In Defense Acquisition University (DAU)/U.S. Department of Defense (DoD) [database online]. Ft. Belvoir, VA, USA. Available from https://dag.dau.mil/Pages/Default.aspx (accessed 2010).

Gold-Bernstein, B., and W. A. Ruh. 2004. Enterprise integration: The essential guide to integration solutions. Boston, MA, USA: Addison Wesley Professional.

Grady, J. O. 1994. System integration. Boca Raton, FL, USA: CRC Press, Inc.

Martin, J. N. 1997. Systems engineering guidebook: A process for developing systems and products. 1st ed. Boca Raton, FL, USA: CRC Press.

Prosnik, G. 2010. Materials from "systems 101: Fundamentals of systems engineering planning, research, development, and engineering". DAU distance learning program. eds. J. Snoderly, B. Zimmerman. Ft. Belvoir, VA, USA: Defense Acquisition University (DAU)/U.S. Department of Defense (DoD).

SEI. 2007. Capability maturity model integrated (CMMI) for development, version 1.2, measurement and analysis process area. Pittsburgh, PA, USA: Software Engineering Institute (SEI)/Carnegie Mellon University (CMU).

3.4.10 Glossary

3.4.10.1 Acronyms

DAU—U.S. Defense Acquisition University

DoD—U.S. Department of Defense

ESB—Enterprise Service Bus

INCOSE—International Council on Systems Engineering

IV&V—Integration, Verification, & Validation

NASA—U.S. National Aeronautics and Space Administration

PHS&T—Packaging, Handling, Storage, and Transportation

SOI—System-of-Interest

SoS—System-of-Systems

V&V—Verification & Validation

3.4.10.2 Terminology

Aggregate—an aggregate is a subset of the system made up of several physical Components and Links (indiscriminately system elements or sub-systems) on which a set of Verification Actions is applied.

Assembly procedure—an assembly procedure groups a set of elementary assembly actions to build an Aggregate of physical Components and Links.

Assembly tool—an assembly tool is a physical tool used to connect, assemble or link several Components and Links to build Aggregates (specific tool, harness, etc.).

Implementation—the process that actually yields the lowest-level system elements in the system hierarchy (system breakdown structure).

Integration—a process that combines system elements to form complete or partial system configurations in order to create a product specified in the system requirements. (ISO/IEC 2008)

System realization—includes the activities required to build a system, integrate disparate system elements, and ensure that the system meets both the Stakeholders' Requirements and the System Requirements and aligns with the design properties identified or defined in the System Definition processes.

Validation—the process of ensuring that the system achieves its intended use in its operational environment and conditions.

Validation plan—A document which explains how the validation data will be used to determine that the realized system (product, service, or enterprise) complies with the System Requirements and/or Stakeholders Requirements.

Verification—the process of ensuring that a system is built according to its specified requirements and/or design characteristics.

Validation action—A validation action describes what must be validated (the element as reference), on which element, the expected result, the validation/verification technique to apply, and at which level of decomposition.

Verification action—A verification action describes what must be verified (the element as reference), on which element, the expected result, the verification technique to apply, and at which level of decomposition.

Validation configuration—A validation configuration groups all physical elements (system elements, sub-systems, system and Validation Tools) necessary to perform a Validation Procedure.

Verification configuration—A verification configuration groups all physical elements (Aggregates and Verification Tools) necessary to perform a Verification Procedure.

Validation procedure—a validation procedure groups a set of Validation Actions performed together (as a scenario of tests) in a given Validation Configuration.

Verification procedure—a verification procedure groups a set of Verification Actions performed together (as a scenario of tests) in a given Verification Configuration.

Validation tool—a validation tool is a device or physical tool used to perform Validation Procedures (test bench, simulator, cap/stub, launcher, etc.).

Verification tool—a verification tool is a device or physical tool used to perform Verification Procedures (test bench, simulator, cap/stub, launcher, etc.).