System Affordability

A system is affordable to the degree that system performance, cost, and schedule constraints are balanced over the system life, while mission needs are satisfied in concert with strategic investment and organizational needs (INCOSE 2011). Design for affordability is the practice of considering affordability as a design characteristic or constraint.

Increasing competitive pressures and the scarcity of resources demand that systems engineering (SE) improve affordability. Several recent initiatives have made affordability their top technical priority. They also call for a high priority to be placed on research into techniques — namely, improved systems autonomy and human performance augmentation — that promise to reduce labor costs, provide more efficient equipment to reduce supply costs, and create adaptable systems whose useful lifetime is extended cost-effectively.

Yet methods for cost and schedule estimation have not changed significantly to address these new challenges and opportunities. There is a clear need for

  • new methods to analyze tradeoffs between cost, schedule, effectiveness, and resilience;
  • new methods to adjust priorities and deliverables to meet budgets and schedules; and
  • more affordable systems development processes.
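
As a hedged illustration of the first of these needs, the sketch below scores a few candidate systems against weighted cost, schedule, effectiveness, and resilience attributes. The option names, attribute values, and weights are invented for the example; in practice they would come from the program's own estimates and stakeholders.

```python
# Hypothetical illustration of a cost/schedule/effectiveness/resilience tradeoff.
# The candidate options, attribute values, and weights below are invented.

CANDIDATES = {
    "Option A": {"cost": 120.0, "schedule": 36, "effectiveness": 0.80, "resilience": 0.60},
    "Option B": {"cost": 150.0, "schedule": 30, "effectiveness": 0.90, "resilience": 0.75},
    "Option C": {"cost": 100.0, "schedule": 42, "effectiveness": 0.70, "resilience": 0.55},
}

WEIGHTS = {"cost": 0.3, "schedule": 0.2, "effectiveness": 0.3, "resilience": 0.2}
SMALLER_IS_BETTER = {"cost", "schedule"}   # attributes where less is preferred


def normalized(attr: str, value: float, column: list) -> float:
    """Scale an attribute to [0, 1], flipping direction for cost-type attributes."""
    lo, hi = min(column), max(column)
    if hi == lo:
        return 1.0
    score = (value - lo) / (hi - lo)
    return 1.0 - score if attr in SMALLER_IS_BETTER else score


def weighted_score(option: dict) -> float:
    """Sum of weighted, normalized attribute scores for one candidate."""
    total = 0.0
    for attr, weight in WEIGHTS.items():
        column = [c[attr] for c in CANDIDATES.values()]
        total += weight * normalized(attr, option[attr], column)
    return total


scores = {name: weighted_score(attrs) for name, attrs in CANDIDATES.items()}
for name, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{name}: weighted tradeoff score = {score:.2f}")
```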

All of this must be accomplished in the context of the rapid changes underway in technology, competition, operational concepts, and workforce characteristics.

Background

Historically, cost and schedule estimation has been decoupled from technical SE tradeoff analyses and decision reviews. Most models and tools focus on evaluating either cost-schedule performance or technical performance, but not the tradeoffs between the two. Meanwhile, organizations and their systems engineers often focus on affordability to minimize acquisition costs. They are then drawn into the easiest-first approaches that yield early successes, at the price of being stuck with brittle, expensive-to-change architectures that increase technical debt and life cycle costs.

Two indications that the need for change is being recognized in systems engineering are that the INCOSE SE Handbook now includes affordability as one of the criteria for evaluating requirements (INCOSE 2011), and that there is a trend in SE towards a stronger focus on maintainability, flexibility, and evolution (Blanchard, Verma, and Peterson 1995).

Modularization

A key SE principle in this regard involves modularization of the system’s architecture around its most frequent sources of change (Parnas 1979). Then, when changes are needed, their side effects are contained within a single system element, rather than rippling across the entire system. This approach creates needs for three further improvements. One is to refocus the system requirements not only on a snapshot of current needs, but also on the most likely sources of requirements change, captured as evolution requirements. Another is to monitor and acquire knowledge of the most frequent sources of change in order to better identify requirements for evolution. A third is to evaluate the system’s proposed architecture to assess how well it will support the evolution requirements, as well as the initial snapshot requirements.
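
The sketch below illustrates this modularization principle under an assumed scenario: the volatile element is taken to be an external message format, hidden behind a stable interface so that a format change touches one system element only. All class and function names here are hypothetical.

```python
# A minimal sketch of modularizing around an anticipated source of change.
# The volatile element is assumed to be an external message format.

import json
from abc import ABC, abstractmethod


class MessageCodec(ABC):
    """Single system element that encapsulates the volatile message format."""

    @abstractmethod
    def encode(self, fields: dict) -> bytes: ...

    @abstractmethod
    def decode(self, payload: bytes) -> dict: ...


class JsonCodec(MessageCodec):
    """One concrete format; swapping formats means replacing only this element."""

    def encode(self, fields: dict) -> bytes:
        return json.dumps(fields).encode("utf-8")

    def decode(self, payload: bytes) -> dict:
        return json.loads(payload.decode("utf-8"))


class ExternalLink:
    """The rest of the system depends only on the stable MessageCodec interface,
    so a format change is contained in one element instead of rippling outward."""

    def __init__(self, codec: MessageCodec):
        self.codec = codec

    def send(self, fields: dict) -> bytes:
        return self.codec.encode(fields)


link = ExternalLink(JsonCodec())
print(link.send({"command": "SAFE_MODE", "priority": 1}))
```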

An extension of this approach is to address families of products or product lines: the systems engineers identify the commonalities and variabilities across the product line, and develop architectures in which the common elements are created (and evolved) once, with plug-compatible interfaces for inserting the variable elements (Boehm, Lane, and Madachy 2010). This approach has been extended into principles for service-oriented system elements, which are characterized by their inputs, outputs, and assumptions, and which can easily be composed into systems in which the sources of change were not anticipated.
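
A minimal sketch of the product-line idea follows, assuming an invented product family: a common core element is developed once, and each variant plugs its own variable elements in through the same interface signature.

```python
# A hedged sketch of product-line structure: a common core element is developed
# once, and each product variant plugs its own variable elements in through the
# same signature. The element and variant names are invented for the example.

from dataclasses import dataclass
from typing import Callable, Dict, List


def core_navigation(state: Dict) -> Dict:
    """Common element, built and evolved once for the whole product line."""
    state["position_fix"] = "computed"
    return state


def radar_sensor(state: Dict) -> Dict:
    """Variable element used by one variant."""
    state["track_source"] = "radar"
    return state


def optical_sensor(state: Dict) -> Dict:
    """Variable element used by another variant."""
    state["track_source"] = "optical"
    return state


@dataclass
class ProductVariant:
    name: str
    elements: List[Callable[[Dict], Dict]]   # plug-compatible element chain

    def run(self) -> Dict:
        state: Dict = {}
        for element in self.elements:
            state = element(state)
        return state


maritime = ProductVariant("maritime", [core_navigation, radar_sensor])
airborne = ProductVariant("airborne", [core_navigation, optical_sensor])
print(maritime.run(), airborne.run())
```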

This approach can also be extended into classes of smart or autonomous systems, which include many sensors that identify needed changes, together with autonomous agents that can determine and effect such changes in microseconds, much more rapidly than humans can. Such autonomy not only reduces reaction time, but also reduces the amount of human labor needed to operate the systems, thus improving affordability.
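
As a minimal, hypothetical sketch of such an agent, the fragment below senses a parameter, decides, and acts within one loop iteration, without waiting on a human operator. The sensor readings, limit, and action names are invented for illustration.

```python
# A minimal, hypothetical sketch of an autonomous sense-decide-act agent.

def read_temperature(step: int) -> float:
    """Stand-in for a real sensor; returns canned values for the example."""
    return [72.0, 76.5, 81.2, 84.0][step]


def decide(reading: float, upper_limit: float = 80.0) -> str:
    """Agent policy: throttle down whenever the limit is exceeded."""
    return "throttle_down" if reading > upper_limit else "no_action"


for step in range(4):
    reading = read_temperature(step)
    print(f"step {step}: reading = {reading:5.1f} -> action = {decide(reading)}")
```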

Pitfalls

There are pitfalls for the unwary. Autonomous systems experience several hazardous failure modes, including:

  • system instability due to positive feedback — an agent senses a parameter reaching a control limit and gives the system a strong push in the other direction; the system then rapidly approaches the other control limit, causing the agent (or another) to give it an even stronger push back in the original direction, and so on (a toy simulation of this failure mode follows this list)
  • self-modifying autonomous agents which fail after several self-modifications — such failures are difficult to debug because the agent’s state has changed so many times
  • autonomous agents performing weakly at commonsense reasoning about why human operators have made system control decisions, and so tending to reach wrong conclusions and make wrong decisions about controlling the system
  • multiple agents making contradictory decisions about controlling the system, and lacking the ability to understand the contradiction or to negotiate a solution to resolve it
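
The following toy simulation, with entirely hypothetical parameters, illustrates the first of these failure modes: a correction gain stronger than the situation warrants turns each push into a larger error in the opposite direction.

```python
# A toy simulation (all parameters hypothetical) of the positive-feedback pitfall:
# an agent whose correction gain is too strong turns each observed error into a
# larger error of the opposite sign, until the value slams between the limits.

SETPOINT = 50.0
PUSH_GAIN = 2.5              # each correction is 2.5x the observed error: too strong
LOWER, UPPER = 0.0, 100.0    # physical control limits

value = 55.0                 # starts only slightly above the setpoint
for step in range(8):
    error = value - SETPOINT
    value -= PUSH_GAIN * error               # the "strong push" in the other direction
    value = max(LOWER, min(UPPER, value))    # clamp to the control limits
    print(f"step {step}: value = {value:6.1f}")
# The error changes sign and grows by 1.5x each step (new error = (1 - 2.5) * error),
# i.e., positive feedback through overcorrection rather than convergence.
```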

Practical Considerations

Autonomous systems need human supervision, and the humans involved require better methods for trend analysis and visualization of trends (especially undesired ones).
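
One possible form such support could take is sketched below: a least-squares slope over recent telemetry samples flags a sustained, undesired drift for operator review. The telemetry values and alert threshold are invented for the example.

```python
# A minimal sketch (invented telemetry) of trend-analysis support for a human
# supervisor: the recent slope distinguishes a sustained drift from a momentary
# excursion. The alert threshold is arbitrary.

def recent_slope(samples: list) -> float:
    """Least-squares slope of the samples, assuming one sample per time step."""
    n = len(samples)
    mean_t = (n - 1) / 2
    mean_y = sum(samples) / n
    num = sum((t - mean_t) * (y - mean_y) for t, y in enumerate(samples))
    den = sum((t - mean_t) ** 2 for t in range(n))
    return num / den


coolant_margin = [12.0, 11.8, 11.9, 11.4, 11.1, 10.7, 10.2]   # hypothetical telemetry
slope = recent_slope(coolant_margin)
if slope < -0.2:   # alert threshold chosen arbitrarily for the example
    print(f"ALERT for operator review: margin trending down at {slope:.2f} units/step")
```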

There is also the need, with autonomous systems, to extend the focus from life cycle costs to total ownership costs, which encompass the costs of failures, including losses in sales, profits, mission effectiveness, or human quality of life. This creates a further need to evaluate affordability in light of the value added by the system under consideration. In principle, this involves evaluating the system’s total cost of ownership with respect to its mission effectiveness and resilience across a number of operational scenarios. However, determining the appropriate scenarios and their relative importance is not easy, particularly for multi-mission systems of systems. Often, the best that can be done involves a mix of scenario evaluation and evaluation of general system attributes, such as cost, schedule, performance, and so on.
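
The sketch below illustrates, with invented scenario weights and cost figures, how a candidate's total ownership cost might be weighed against its mission effectiveness aggregated across operational scenarios; choosing the scenarios and their relative weights remains the hard part described above.

```python
# A hedged sketch of weighing total ownership cost against mission effectiveness
# aggregated across operational scenarios. Scenario names, weights, and cost
# figures are invented for the example.

SCENARIOS = {
    # name: (relative importance weight, candidate effectiveness in that scenario, 0..1)
    "routine operations":    (0.50, 0.95),
    "surge demand":          (0.30, 0.80),
    "degraded/failure mode": (0.20, 0.55),
}

ACQUISITION_COST = 40.0   # notional cost units
OPERATING_COST = 55.0
FAILURE_COSTS = 15.0      # losses in sales, profits, or mission effectiveness from failures

total_ownership_cost = ACQUISITION_COST + OPERATING_COST + FAILURE_COSTS
weighted_effectiveness = sum(w * e for w, e in SCENARIOS.values())

print(f"total ownership cost:        {total_ownership_cost:.1f}")
print(f"weighted effectiveness:      {weighted_effectiveness:.2f}")
print(f"effectiveness per unit cost: {weighted_effectiveness / total_ownership_cost:.4f}")
```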

As for these system attributes, different success-critical stakeholders will have different preferences, or utility functions, for a given attribute. This makes converging on a mutually satisfactory choice among the candidate system solutions a difficult challenge, involving the resolution of the multi-criteria decision analysis (MCDA) problem among the stakeholders (Boehm and Jain 2006). This is a well-known problem with several complications, such as Arrow’s impossibility theorem, which describes the inability to guarantee a mutually optimal solution among several stakeholders, and paradoxes in stakeholder preference aggregation, in which different voting procedures produce different winning solutions. Still, groups of stakeholders need to make decisions, and various negotiation support systems enable people to better understand each other’s utility functions and to arrive at mutually satisfactory decisions, in which no one gets everything that they want, but everyone is at least as well off as they are with the current system.
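
The small example below, using invented stakeholder rankings, illustrates the preference-aggregation point: two common procedures (plurality and a Borda count) crown different winning solutions from the same rankings, which is one reason negotiation support is needed.

```python
# A small illustration (invented stakeholder rankings) of a preference-aggregation
# paradox: different aggregation procedures produce different winning solutions.

from collections import Counter

# Each tuple is one stakeholder's ranking of candidate solutions, best first.
RANKINGS = [
    ("A", "B", "C"),
    ("A", "B", "C"),
    ("A", "B", "C"),
    ("B", "C", "A"),
    ("B", "C", "A"),
    ("C", "B", "A"),
    ("C", "B", "A"),
]

# Procedure 1: plurality (count only first choices).
plurality = Counter(ranking[0] for ranking in RANKINGS)

# Procedure 2: Borda count (2 points for first place, 1 for second, 0 for third).
borda = Counter()
for ranking in RANKINGS:
    for points, option in zip((2, 1, 0), ranking):
        borda[option] += points

print("plurality winner:", plurality.most_common(1)[0][0])   # A (3 first-place votes)
print("Borda winner:    ", borda.most_common(1)[0][0])       # B (highest Borda score)
```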

Also see System Analysis for considerations of cost and affordability in the technical design space.

Works Cited

Blanchard, B., D. Verma, and E. Peterson. 1995. Maintainability: A Key to Effective Serviceability and Maintenance Management. New York, NY, USA: Wiley and Sons.

Boehm, B., J. Lane, and R. Madachy. 2010. "Valuing System Flexibility via Total Ownership Cost Analysis." Proceedings of the NDIA SE Conference, October 2010, San Diego, CA, USA.

Boehm, B. and A. Jain. 2006. "A Value-Based Theory of Systems Engineering." Proceedings of the International Council on Systems Engineering (INCOSE) International Symposium (IS), July 9-13, 2006, Orlando, FL, USA.

INCOSE. 2011. Systems Engineering Handbook, version 3.2.1: 79. San Diego, CA, USA: International Council on Systems Engineering (INCOSE). INCOSE-TP-2003-002-03.2.1.

Parnas, D. L. 1979. "Designing Software for Ease of Extension and Contraction." IEEE Transactions on Software Engineering SE-5(2): 128-138. Los Alamitos, California, USA: IEEE Computer Society.

Primary References

INCOSE. 2011. Systems Engineering Handbook, version 3.2.1: 79. San Diego, CA, USA: International Council on Systems Engineering (INCOSE). INCOSE-TP-2003-002-03.2.1.

Blanchard, B., D. Verma, and E. Peterson. 1995. Maintainability: A Key to Effective Serviceability and Maintenance Management. New York, NY, USA: Wiley and Sons.

Parnas, D. L. 1979. "Designing Software for Ease of Extension and Contraction." IEEE Transactions on Software Engineering SE-5(2): 128-138. Los Alamitos, California, USA: IEEE Computer Society.

Additional References

Boehm, B., J. Lane, and R. Madachy. 2010. "Valuing System Flexibility via Total Ownership Cost Analysis." Proceedings of the NDIA SE Conference, October 2010, San Diego, CA, USA.

Boehm, B. and A. Jain. 2006. "A Value-Based Theory of Systems Engineering." Proceedings of the International Council on Systems Engineering (INCOSE) International Symposium (IS), July 9-13, 2006, Orlando, FL, USA.

Kobren, Bill. 2011. "Supportability as an Affordability Enabler: A Critical Fourth Element of Acquisition Success Across the System Life Cycle." Defense AT&L: Better Buying Power. Oct 2011. Accessed 28 August 2012 at http://www.dau.mil/pubscats/ATL%20Docs/Sep-Oct11/Kobren.pdf.

Myers, S.E., P.P. Pandolfini, J. F. Keane, O. Younossi, J. K. Roth, M. J. Clark, D. A. Lehman, and J. A. Dechoretz. 2000. "Evaluating affordability initiatives." Johns Hopkins APL Tech. Dig. 21(3), 426–437.

