Virginia Class Submarine


This systems engineering (SE) example was developed directly for the SEBoK. It describes the Virginia class submarine sonar system project, in particular the approach taken to the development of the sonar system architecture and how this helped in the integration of commercial off-the-shelf products.

Description

Prior to the Virginia class submarine, sonar systems were composed of proprietary components and interfaces. In the mid-1990s, however, the United States government transitioned to the use of commercially developed products, or commercial off-the-shelf (COTS) products, as a measure to reduce the escalating costs associated with proprietary research and development. The Virginia class submarine system design represented a transition to COTS-based parts and initiated a global change in the architectural approaches adopted by the sonar community. Through standardization, the lead ship of the program, Virginia, reduced the number of historically procured parts for nuclear submarines by 60%. The Virginia class submarine sonar system architecture offers improved modularity, commonality, standardization, and reliability, maintainability, and testability (RMT) over historical sonar systems.

Architectural Approach: Standardization

Based on the new architectural approach and the success of the transition, system architecture experts developed an initial set of architecture evaluation metrics, listed below (an illustrative sketch of how a few of the ratio-based metrics might be computed follows the list):

  • Commonality
    • Physical commonality (within the system)
      • Hardware (HW) commonality (e.g., the number of unique line replaceable units, fasteners, cables, and unique standards implemented)
      • Software (SW) commonality (e.g., the number of unique SW packages implemented, languages, compilers, average SW instantiations, and unique standards implemented)
    • Physical familiarity (with other systems)
      • Percentage of vendors and subcontractors known
      • Percentage of HW and SW technology known
    • Operational commonality
      • Percentage of operational functions which are automated
      • Number of unique skill codes required
      • Estimated operational training time (e.g., initial and refresh from previous system)
      • Estimated maintenance training time (e.g., initial and refresh from previous system)
  • Modularity
    • Physical modularity (e.g., ease of system element or operating system upgrade)
    • Functional modularity (e.g., ease of adding new functionality or upgrading existing functionality)
    • Orthogonality
      • Level to which functional requirements are fragmented across multiple processing elements and interfaces
      • Level to which throughput requirements span across interfaces
      • Level to which common specifications are identified
    • Abstraction (i.e., the level to which the system architecture provides an option for information hiding)
    • Interfaces
      • Number of unique interfaces per system element
      • Number of different networking protocols
      • Explicit versus implicit interfaces
      • Level to which the architecture includes implicit interfaces
      • Number of cables in the system
  • Standards-based openness
    • Interface standards
      • Ratio of the number of interface standards to the number of interfaces
      • Number of vendors for products based on standards
      • Number of business domains that apply/use the standard (e.g., aerospace, medical, and telecommunications)
      • Standard maturity
    • Hardware standards
      • Ratio of the number of form factors to the number of line replaceable units (LRUs)
      • Number of vendors for products based on standards
      • Standard maturity
    • Software standards
      • Number of proprietary and unique operating systems
      • Number of non-standard databases
      • Number of proprietary middleware components
      • Number of non-standard languages
    • Consistency orientation
      • Common guidelines for implementing diagnostics and performance monitor/fault location (PM/FL)
      • Common guidelines for implementing human-machine interface (HMI)
  • Reliability, maintainability, and testability
    • Reliability (fault tolerance)
    • Critical points of fragility (e.g., system loading in terms of percent of processor, memory, and network loading)
    • Maintainability (e.g., expected mean time to repair (MTTR), maximum fault group size, whether the system can be operational during maintenance)
    • Accessibility (e.g., space restrictions, special tool requirements, special skill requirements)
    • Testability
      • Number of LRUs covered by built-in tests (BIT) (BIT coverage)
      • Reproducibility of errors
      • Logging/recording capability
      • Whether the system state at time of system failure can be recreated
      • Online testing (e.g., whether the system is operational during external testing and the ease of access to external test points)
      • Automated input/stimulation insertion
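
As a purely illustrative aid (not part of the original case study), the short Python sketch below shows how a few of the ratio-based metrics named in this list, such as the interface-standards ratio, the form-factor-to-LRU ratio, and BIT coverage, might be computed from raw architecture counts. All field names and example values are hypothetical and are not drawn from the Virginia class program.

# Minimal, hypothetical sketch of computing a few of the ratio-based
# architecture evaluation metrics listed above; the names and numbers
# are illustrative only, not data from the Virginia class program.
from dataclasses import dataclass

@dataclass
class ArchitectureCounts:
    interfaces: int             # total unique interfaces in the system
    interface_standards: int    # open standards those interfaces are built on
    lrus: int                   # line replaceable units (LRUs)
    form_factors: int           # distinct hardware form factors among the LRUs
    lrus_with_bit: int          # LRUs covered by built-in test (BIT)
    operational_functions: int  # operator-visible functions
    automated_functions: int    # of those, how many are automated

def evaluate(c: ArchitectureCounts) -> dict:
    """Return a few of the architecture evaluation ratios described above."""
    return {
        "interface_standards_ratio": c.interface_standards / c.interfaces,
        "form_factor_to_lru_ratio": c.form_factors / c.lrus,
        "bit_coverage": c.lrus_with_bit / c.lrus,
        "automation_percentage": 100.0 * c.automated_functions / c.operational_functions,
    }

# Example with made-up counts: fewer form factors per LRU and fewer unique
# interface standards per interface suggest greater standardization; higher
# BIT coverage suggests better testability.
print(evaluate(ArchitectureCounts(
    interfaces=40, interface_standards=6,
    lrus=120, form_factors=8, lrus_with_bit=102,
    operational_functions=50, automated_functions=35)))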

Other Points

The Virginia class submarine acquisition exhibited other best practices, which are discussed by Schank et al. (2011), GAO (2009), and GD Electric Boat Division (2002). These included stringent design trades to keep costs under control, careful consideration of the technical maturity of components, and attention to program stability.

Summary

In summary, the work on the Virginia class submarine prompted a change in the traditional architectural approach used by the sonar community to design submarine sonar systems, and it validated the savings in both research and development (R&D) costs and component costs achieved by transitioning from proprietary interfaces to industry-standard interfaces. The identification of a set of feasible architecture evaluation metrics was an added benefit of the effort.

References

Works Cited

GAO. 2009. Defense Acquisitions: Assessments of Selected Weapon Programs. Washington, DC, USA: U.S. Government Accountability Office (GAO). March 2009. GAO-09-326SP.

GD Electric Boat Division. 2002. The Virginia Class Submarine Program: A Case Study. Groton, CT, USA: General Dynamics Electric Boat Division. February 2002.

Schank, J.F. et al. 2011. Learning from Experience, Volume 2: Lessons from the U.S. Navy's Ohio, Seawolf, and Virginia Submarine Programs. Santa Monica, CA, USA: RAND Corporation. Available at http://www.rand.org/content/dam/rand/pubs/monographs/2011/RAND_MG1128.2.pdf

Primary References

None.

Additional References

None.

