https://sandbox.sebokwiki.org/api.php?action=feedcontributions&user=Hle&feedformat=atomSEBoK - User contributions [en]2024-03-28T10:13:19ZUser contributionsMediaWiki 1.35.13https://sandbox.sebokwiki.org/index.php?title=Emerging_Topics&diff=63749Emerging Topics2022-04-17T20:33:40Z<p>Hle: /* Introduction to Systems Engineering Transformation */ updated to refer SE Vision 2035</p>
<hr />
<div>'''''Lead Author:''' Robert Cloutier''<br />
-----<br />
<br />
The Emerging Topics section is intended to introduce and inform the reader on significant and rapidly emerging needs and trends in practicing systems engineering within the community. It is not intended to be all-inclusive. Instead, those topics that have a high probability of significantly impacting the practice of systems engineering, as determined by the SEBoK editorial board, are covered. If the reader has recommendations of emerging topics that should be covered, please send an email to SEBoK@incose.org, or leave a comment in the comment feature at the bottom of this page.<br />
<br />
== Introduction to Systems Engineering Transformation ==<br />
The knowledge covered in this KA reflects the transformation and continued evolution of SE, which are driven by current and future challenges (see [[Systems Engineering: Historic and Future Challenges]]). This notion of SE transformation and the other areas of knowledge it includes are discussed briefly below.<br />
<br />
The INCOSE Systems Engineering Vision 2035 (INCOSE 2021) describes the global context for SE, the current state of SE practice and the possible future state of SE. It describes a number of ways in which SE continues to evolve to meet modern system challenges. These are summarized briefly below. <br />
<br />
Systems engineering has evolved from a combination of practices used in a number of related industries (particularly aerospace and defense). These have been used as the basis for a standardized approach to the life cycle of any complex system (see [[Systems Engineering and Management]]). Hence, SE practices are largely based on heuristics, with efforts underway to evolve a theoretical foundation for systems engineering (see [[Foundations of Systems Engineering]]) that considers foundational knowledge from a variety of sources. <br />
<br />
Systems engineering continues to evolve in response to a long history of increasing system '''complexity'''. Such complexity arises from human and societal needs, global megatrends, and grand engineering challenges, and is shaped by stakeholder expectations and the enterprise environment. System solutions require both depth and breadth, and the design of those solutions must consider both technical and social aspects (see [[Socio-technical Systems]]). <br />
<br />
Many systems engineering practices have become standard (e.g. trade studies, risk analysis), while others are still in transition (e.g. [[Model-Based Systems Engineering (MBSE) (glossary)|Model-Based Systems Engineering]], [[Agile (glossary)|agile]], systems-of-systems). More recently, the rise of Artificial Intelligence (AI) introduces unprecedented challenges in the verification and validation of AI-infused systems, but also opens up new opportunities to apply AI methodologies in the design of systems.<br />
<br />
Systems engineering has gained recognition across industries, academia and governments. However, SE practice varies across industries, organizations, and system types. Cross-fertilization of systems engineering practices across industries has begun, slowly but surely; however, the global need for systems capabilities has outpaced the progress in systems engineering. <br />
<br />
INCOSE Vision 2035 concludes that SE is poised to play a major role in addressing some of the global challenges of the 21st century, that it has already begun to change to meet these challenges, and that it needs to undergo a more significant '''transformation''' to meet them fully. The following bullet points are taken from the summary section of Vision 2035 and define the attributes of a transformed SE discipline in the future:<br />
* The future of systems engineering is model-based, enabled by enterprise digital transformation.<br />
* Systems engineering practices will make significant advancements to deal with systems complexity and enable enterprise agility.<br />
* Systems engineering will leverage practices from other disciplines such as data science to help manage the growth in data.<br />
* Formal systems engineering theoretical foundations will be codified leading to new research and development in the next generation of systems engineering methods and tools.<br />
* AI will impact both systems engineering practice and the types of systems designed by the systems engineering community.<br />
* There will be a step change in systems engineering education starting with early education with a heavy focus on lifelong learning.<br />
Some of these future directions of SE are covered in the SEBoK. Others need to be introduced and fully integrated into the SE knowledge areas as they evolve. This KA will be used to provide an overview of these transforming aspects of SE as they emerge. This transformational knowledge will be integrated into all aspects of the SEBoK as it matures.<br />
<br />
==Topics in Part 8==<br />
<br />
*[[Transitioning Systems Engineering to a Model-based Discipline]]<br />
*[[Model-Based Systems Engineering Adoption Trends 2009-2018]]<br />
*[[Digital Engineering]]<br />
*[[Set-Based Design]]<br />
*[[Socio-technical Systems]]<br />
*[[Systems Engineering and Artificial Intelligence]]<br />
==References==<br />
===Works Cited===<br />
None.<br />
<br />
===Primary References===<br />
None.<br />
<br />
===Additional References===<br />
None.<br />
----<br />
<center>[[Emerging Knowledge|< Previous Article]] | [[Emerging Knowledge|Parent Article]] | [[Introduction to SE Transformation|Next Article >]]</center><br />
<br />
<center>'''SEBoK v. 2.6, released 13 May 2022'''</center><br />
<br />
[[Category: Part 8]]<br />
[[Category:Knowledge Area]]</div>Hlehttps://sandbox.sebokwiki.org/index.php?title=Verification_and_Validation_of_Systems_in_Which_AI_is_a_Key_Element&diff=61081Verification and Validation of Systems in Which AI is a Key Element2021-05-12T20:52:36Z<p>Hle: Added links to verification and validation in glossary.</p>
<hr />
<div>'''''Lead Author:''''' ''Laura Pullum''<br />
----Many systems are being considered in which artificial intelligence (AI) will be a key element. Failure of an AI element can lead to system failure (Dreossi et al 2017), hence the need for AI [[Verification (glossary)|verification]] and [[Validation (glossary)|validation]] (V&V). The element(s) containing AI capabilities is treated as a subsystem and V&V is conducted on that subsystem and its interfaces with other elements of the system under study, just as V&V would be conducted on other subsystems. That is, the high-level definitions of V&V do not change for systems containing one or more AI elements.<br />
<br />
However, AI V&V challenges require approaches and solutions beyond those for conventional or traditional (those without AI elements) systems. This article provides an overview of how machine learning components/subsystems “fit” in the systems engineering framework, identifies characteristics of AI subsystems that create challenges in their V&V, illuminates those challenges, and provides some potential solutions while noting open or continuing areas of research in the V&V of AI subsystems.<br />
<br />
== Overview of V&V for AI-based Systems ==<br />
Conventional systems are engineered via three overarching phases: requirements, design, and V&V. These phases are applied to each subsystem and to the system under study. As shown in Figure 1, this is the case even if the subsystem is based on AI techniques.<br />
<br />
[Figure 1]<br />
<br />
AI-based systems follow a different lifecycle than do traditional systems. As shown in the general machine learning life cycle illustrated in Figure 2, V&V activities occur throughout the life cycle. In addition to requirements allocated to the AI subsystem (as is the case for conventional subsystems), there also may be requirements for data that flow up to the system from the AI subsystem.<br />
<br />
[Figure 2]<br />
<br />
== Characteristics of AI Leading to V&V Challenges ==<br />
Though some aspects of V&V for conventional systems can be used without modification, there are important characteristics of AI subsystems that lead to challenges in their verification and validation. In a survey of engineers, Ishikawa and Yoshioka (2019) identify attributes of machine learning that make engineering such systems difficult. According to the engineers surveyed, the top attributes, with a summary of the engineers’ comments, are:<br />
* ''Lack of an oracle'': It is difficult or impossible to clearly define the correctness criteria for system outputs or the right outputs for each individual input.<br />
* ''Imperfection'': It is intrinsically impossible for an AI system to be 100% accurate.<br />
* ''Uncertain behavior for untested data'': There is high uncertainty about how the system will behave in response to untested input data, as evidenced by radical changes in behavior given slight changes in input (e.g., adversarial examples).<br />
* ''High dependency of behavior on training data'': System behavior is highly dependent on the training data.<br />
These attributes are characteristic of AI itself and can be generalized as follows:<br />
* Erosion of determinism<br />
* Unpredictability and unexplainability of individual outputs (Sculley et al., 2014)<br />
* Unanticipated, emergent behavior, and unintended consequences of algorithms<br />
* Complex decision making of the algorithms<br />
* Difficulty of maintaining consistency and weakness against slight changes in inputs (Goodfellow et al., 2015)<br />
<br />
== V&V Challenges of AI Systems ==<br />
<br />
=== Requirements ===<br />
Challenges with respect to AI requirements and AI requirements engineering are extensive, due in part to the practice by some of treating the AI element as a “black box” (Gunning 2016). Formal specification has been attempted and has proven difficult for hard-to-formalize tasks; it requires decisions on the use of quantitative versus Boolean specifications and on the use of data versus formal requirements. The challenge here is to design effective methods to specify both desired and undesired properties of systems that use AI- or ML-based components (Seshia 2020). <br />
<br />
A taxonomy of AI requirements engineering challenges, outlined by Belani and colleagues (2019), is shown in Table 3. <br />
{| class="wikitable"<br />
|+Table 3: Requirements engineering for AI (RE4AI) taxonomy, mapping challenges to AI-related entities and requirements engineering activities (after (Belani et al., 2019))<br />
!RE4AI<br />
! colspan="3" |AI Related Entities<br />
|-<br />
|'''RE Activities'''<br />
|'''Data'''<br />
|'''Model'''<br />
|'''System'''<br />
|-<br />
|'''Elicitation'''<br />
|<nowiki>- Availability of large datasets</nowiki><br />
<br />
- Requirements analyst upgrade<br />
|<nowiki>- Lack of domain knowledge</nowiki><br />
<br />
- Undeclared consumers<br />
|<nowiki>- How to define problem/scope</nowiki><br />
<br />
- Regulation (e.g., ethics) not clear<br />
|-<br />
|'''Analysis'''<br />
|<nowiki>- Imbalanced datasets, silos</nowiki><br />
<br />
- Role: data scientist needed<br />
|<nowiki>- No trivial workflows</nowiki><br />
<br />
- Automation tools needed<br />
|<nowiki>- No integration of end results</nowiki><br />
<br />
- Role: business analyst upgrade<br />
|-<br />
|'''Specification'''<br />
|<nowiki>- Data labelling is costly, needed</nowiki><br />
<br />
- Role: data engineer needed<br />
|<nowiki>- No end-to-end pipeline support</nowiki><br />
<br />
- Minimum viable model useful<br />
|<nowiki>- Avoid design anti-patterns</nowiki><br />
<br />
- Cognitive / system architect needed<br />
|-<br />
|'''Validation'''<br />
|<nowiki>- Training data critical analysis</nowiki><br />
<br />
- Data dependencies<br />
|<nowiki>- Entanglement, CACE problem</nowiki><br />
<br />
- High scalability issues for ML<br />
|<nowiki>- Debugging, interpretability</nowiki><br />
<br />
- Hidden feedback loops<br />
|-<br />
|'''Management'''<br />
|<nowiki>- Experiment management</nowiki><br />
<br />
- No GORE-like method polished<br />
|<nowiki>- Difficult to log and reproduce</nowiki><br />
<br />
- DevOps role for AI needed<br />
|<nowiki>- IT resource limitations, costs</nowiki><br />
<br />
- Measuring performance<br />
|-<br />
|'''Documentation'''<br />
|<nowiki>- Data & model visualization</nowiki><br />
<br />
- Role: research scientist useful<br />
|<nowiki>- Datasets and model versions</nowiki><br />
<br />
- Education and training of staff<br />
|<nowiki>- Feedback from end-users</nowiki><br />
<br />
- Development method<br />
|-<br />
|'''All of the Above'''<br />
| colspan="3" | - Data privacy and data safety<br />
<br />
- Data dependencies<br />
|}<br />
CACE: change anything, change everything<br />
<br />
GORE: goal-oriented requirements engineering<br />
<br />
=== Data ===<br />
Data is the lifeblood of AI capabilities: it is used to train and evaluate AI models and so produces their capabilities. Data quality attributes of importance to AI include accuracy, currency and timeliness, correctness, and consistency, in addition to usability, security and privacy, accessibility, accountability, scalability, lack of bias, and others. For unsupervised methods, the notion of correctness is embedded in the training data and the environment.<br />
<br />
There is also the question of coverage of the operational space by the training data. If the data does not adequately cover the operational space, the behavior of the AI component is questionable. However, there are no strong guarantees on when a data set is ‘large enough’, and ‘large’ alone is not sufficient: the data must also cover the operational space.<br />
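To make the coverage question concrete, one rough check is to compare the per-feature range of the training data against the declared operational envelope and flag features the data never fully exercises. The sketch below is a minimal, hypothetical illustration (the feature bounds are invented); a real coverage analysis would also need tolerances and reasoning about the joint distribution, not just per-feature ranges.

```python
import numpy as np

def coverage_gaps(train, op_low, op_high):
    """Return indices of features whose training-data range does not
    span the declared operational envelope (a crude per-feature check)."""
    t_low, t_high = train.min(axis=0), train.max(axis=0)
    return [i for i in range(train.shape[1])
            if t_low[i] > op_low[i] or t_high[i] < op_high[i]]

# Hypothetical data: feature 0 spans its operational range [0, 10],
# but feature 1 was only ever trained on [0, 5] of its range [0, 8].
train = np.column_stack([np.linspace(0, 10, 100), np.linspace(0, 5, 100)])
gaps = coverage_gaps(train, op_low=[0.0, 0.0], op_high=[10.0, 8.0])
print(gaps)  # → [1]
```

The flagged feature marks a region of the operational space in which the AI component's behavior is, per the discussion above, questionable.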
<br />
Another data challenge is that of adversarial inputs. Szegedy et al. (2014) discovered that several ML models are vulnerable to adversarial examples. This has been demonstrated many times on image classification software; however, adversarial attacks can also be mounted against other AI tasks (e.g., natural language processing) and against techniques other than the neural networks typically used in image classification, such as reinforcement learning (e.g., reward hacking).<br />
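The sensitivity underlying adversarial examples can be illustrated even on a toy linear classifier: stepping each input component slightly against the score gradient (the idea behind the fast gradient sign method of Goodfellow et al. 2015) flips the decision. The model, weights, and inputs below are hypothetical, chosen only to show the effect.

```python
import numpy as np

# Toy linear classifier: predict class 1 if w.x + b > 0.
w = np.array([1.0, -2.0, 3.0])
b = 0.1

def predict(x):
    return int(w @ x + b > 0)

def fgsm(x, eps):
    """FGSM-style perturbation: for a linear model the gradient of the
    score w.r.t. x is just w, so step each component by eps against it."""
    return x - eps * np.sign(w)

x = np.array([0.2, -0.1, 0.1])   # score = 0.7 + 0.1 = 0.8 > 0 -> class 1
x_adv = fgsm(x, eps=0.2)         # perturbation bounded by 0.2 per component
print(predict(x), predict(x_adv))  # → 1 0
```

A max-norm perturbation of only 0.2 per component is enough to cross the decision boundary, mirroring the "radical changes in behavior given slight changes in input" noted earlier.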
<br />
=== Model ===<br />
Numerous V&V challenges arise in the model space, some of which are provided below.<br />
* ''Modeling the environment'': unknown variables, determining the correct fidelity at which to model, and modeling human behavior. The challenge is to provide a systematic method of environment modeling that supports provable guarantees on the system’s behavior even under considerable uncertainty about the environment (Seshia 2020).<br />
* ''Modeling learning systems'': Very high dimensional input space, very high dimensional parameter or state space, online adaptation/evolution, modeling context (Seshia 2020).<br />
* ''Design and verification of models and data'': data generation, quantitative verification, compositional reasoning, and compositional specification (Seshia 2020). The challenge is to develop techniques for compositional reasoning that do not rely on having complete compositional specifications (Seshia 2017).<br />
* ''Optimization strategy'': must balance between over- and under-specification. One approach, instead of using distance measures between predicted and actual results, uses the cost of an erroneous result (e.g., an incorrect classification) as the criterion (Faria, 2018; Varshney, 2017).<br />
* ''Online learning'': requires monitoring, and its exploration must be prevented from reaching unsafe states.<br />
* ''Formal methods'': intractable state-space explosion arising from the complexity of the software and of the system’s interaction with its environment, and difficulty in writing formal specifications.<br />
* ''Bias'': arises in algorithms from underrepresented or incomplete training data, or from reliance on flawed information that reflects historical inequities. A biased algorithm may lead to decisions with collective disparate impact, and mitigating an algorithm’s bias involves a trade-off between fairness and accuracy.<br />
* ''Test coverage'': effective metrics for test coverage of AI components are an active area of research with several candidate metrics, but currently no clear best practice.<br />
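One candidate test-coverage metric is neuron coverage (Pei et al. 2017): the fraction of neurons activated above a threshold by at least one test input. The sketch below computes it for a tiny fixed-weight ReLU network; the network and inputs are hypothetical, and this is an illustration of the metric's definition, not an endorsement of it as best practice.

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def neuron_coverage(inputs, weights, biases, threshold=0.0):
    """Fraction of neurons whose activation exceeds `threshold` for at
    least one test input -- the metric proposed by Pei et al. (2017)."""
    activated = []
    for W, c in zip(weights, biases):
        acts = relu(inputs @ W + c)              # (n_inputs, n_neurons)
        activated.append((acts > threshold).any(axis=0))
        inputs = acts                            # feed forward to next layer
    return np.concatenate(activated).mean()

# Hypothetical two-layer network with fixed random weights.
rng = np.random.default_rng(1)
weights = [rng.standard_normal((3, 4)), rng.standard_normal((4, 2))]
biases = [np.zeros(4), np.zeros(2)]
tests = rng.standard_normal((10, 3))
cov = neuron_coverage(tests, weights, biases)
print(f"neuron coverage: {cov:.2f}")
```

A test suite that leaves many neurons unactivated has, by this metric, exercised only part of the network's internal behavior.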
<br />
=== Properties ===<br />
Assurance of several AI system properties is necessary to enable trust in the system, i.e., the system’s trustworthiness. This is a separate, though necessary, aspect of system dependability for AI systems. Some important properties are listed below; the list, though extensive, is not comprehensive.<br />
* ''Accountability'': refers to the need for an AI system to be answerable for its decisions, actions and performance to users and others with whom the AI system interacts<br />
* ''Controllability'': refers to the ability of a human or other external agent to intervene in the AI system’s functioning<br />
* ''Explainability'': refers to the property of an AI system to express the important factors influencing its results, or to provide the details or reasons behind its functioning, in a way humans can understand<br />
* ''Interpretability'': refers to the degree to which a human can understand the cause of a decision (Miller 2017)<br />
* ''Reliability'': refers to the property of consistent intended behavior and results<br />
* ''Resilience'': refers to the ability of a system to recover operations quickly following an incident<br />
* ''Robustness'': refers to the ability of a system to maintain its level of performance when errors occur during execution and to maintain that level of performance given erroneous inputs and parameters<br />
* ''Safety'': refers to the freedom from unacceptable risk<br />
* ''Transparency'': refers to the need to describe, inspect and reproduce the mechanisms through which AI systems make decisions, communicating this to relevant stakeholders.<br />
<br />
== V&V Approaches and Standards ==<br />
<br />
=== V&V Approaches ===<br />
Prior to the proliferation of deep learning, research on V&V of neural networks touched on adapting available standards, such as the then-current IEEE Std 1012 (Software Verification and Validation) processes (Pullum et al. 2007), on areas that need to be augmented to enable V&V (Taylor 2005), and on examples of V&V for high-assurance systems with neural networks (Schumann et al. 2010). While these books provide techniques and lessons learned, many of which remain relevant, additional challenges due to deep learning remain unsolved.<br />
<br />
One of the challenges is data validation: it is vital that the data upon which AI depends undergo V&V. Data quality attributes that are important for AI systems include accuracy, currency and timeliness, correctness, consistency, usability, security and privacy, accessibility, accountability, scalability, lack of bias, and coverage of the state space. Data validation steps can include file validation, import validation, domain validation, transformation validation, and aggregation rule and business validation (Gao et al. 2016). <br />
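A few of the validation steps just listed (import, domain, and aggregation-rule validation) might be sketched programmatically as below. The field names, bounds, and rules are hypothetical, invented purely for illustration; a real pipeline would be driven by the system's data requirements.

```python
import csv, io

# Hypothetical schema: allowed numeric range per field.
SCHEMA = {"speed_kmh": (0.0, 300.0), "label": (0, 1)}

def validate_rows(text):
    errors = []
    rows = list(csv.DictReader(io.StringIO(text)))
    for i, row in enumerate(rows):
        for field, (lo, hi) in SCHEMA.items():
            try:
                v = float(row[field])        # import validation: parseable?
            except (KeyError, ValueError):
                errors.append((i, field, "unparseable"))
                continue
            if not lo <= v <= hi:            # domain validation: in range?
                errors.append((i, field, "out of range"))
    # aggregation-rule validation: both classes must be represented
    if {row.get("label") for row in rows} != {"0", "1"}:
        errors.append((-1, "label", "class missing"))
    return errors

data = "speed_kmh,label\n42.0,0\n999.0,1\n"
print(validate_rows(data))  # flags the out-of-range speed in row 1
```

Each kind of check maps to one of the steps from Gao et al.; file and transformation validation would sit before and after this stage respectively.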
<br />
There are several approaches to V&V of AI components, including formal methods (e.g., formal proofs, model checking, probabilistic verification), software testing, simulation-based testing and experiments. Some specific approaches are:<br />
* Metamorphic testing to test ML algorithms, addressing the oracle problem (Xie et al., 2011)<br />
* A ML test score consisting of tests for features and data, model development and ML infrastructure, and monitoring tests for ML (Breck et al., 2016)<br />
* Checking for inconsistency with desired behavior and systematically searching for worst-case outcomes when testing consistency with specifications.<br />
* Corroborative verification (Webster et al., 2020), in which several verification methods, working at different levels of abstraction and applied to the same AI component, may prove useful to verification of AI components of systems.<br />
* Testing against strong adversarial attacks (Uesato et al., 2018); researchers have found that models may show robustness to weak adversarial attacks yet little to no accuracy against strong attacks (Athalye et al., 2018; Uesato et al., 2018; Carlini and Wagner, 2017).<br />
* Use of formal verification to prove that models are consistent with specifications, e.g., (Huang et al., 2017).<br />
* Assurance cases combining the results of V&V and other activities as evidence to support claims on the assurance of systems with AI components (Kelly and Weaver, 2004; Picardi et al. 2020).<br />
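Metamorphic testing, listed above, sidesteps the oracle problem by checking relations between the outputs of related inputs instead of comparing against expected outputs. The sketch below is a hypothetical illustration on a toy 1-nearest-neighbour classifier, checking one classic relation for ML classifiers: predictions should be invariant under permutation of the training set.

```python
import numpy as np

def nn_predict(train_x, train_y, x):
    """1-nearest-neighbour classification."""
    d = np.linalg.norm(train_x - x, axis=1)
    return train_y[np.argmin(d)]

def check_permutation_invariance(train_x, train_y, queries, seed=0):
    """Metamorphic relation: permuting the training set must not change
    any prediction. No expected output (oracle) is required."""
    perm = np.random.default_rng(seed).permutation(len(train_y))
    return all(nn_predict(train_x, train_y, q) ==
               nn_predict(train_x[perm], train_y[perm], q)
               for q in queries)

train_x = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 2.0]])
train_y = np.array([0, 1, 1])
queries = np.array([[0.1, 0.1], [1.9, 1.8]])
print(check_permutation_invariance(train_x, train_y, queries))  # → True
```

A violation of such a relation signals a defect even though the "correct" label for each query was never specified, which is exactly what makes the technique useful for the oracle-less systems described earlier.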
<br />
=== Standards ===<br />
Standards development organizations (SDOs) are working earnestly to develop AI standards, including standards addressing the safety and trustworthiness of AI systems. A few of these SDOs and their AI standardization efforts are described below.<br />
<br />
ISO is the first international SDO to set up an expert group to carry out standardization activities for AI. Subcommittee (SC) 42 is part of the joint technical committee ISO/IEC JTC 1. SC 42 has a working group on foundational standards to provide a framework and a common vocabulary, and several other working groups on computational approaches to and characteristics of AI systems, trustworthiness, use cases, applications, and big data. (https://www.iso.org/committee/6794475.html)<br />
<br />
The IEEE P7000 series of projects are part of the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems, launched in 2016. IEEE P7009, “Fail-Safe Design of Autonomous and Semi-Autonomous Systems” is one of 13 standards in the series. (https://standards.ieee.org/project/7009.html)<br />
<br />
Underwriters Laboratories has been involved in technology safety for 125 years and has released ANSI/UL 4600, “Standard for Safety for the Evaluation of Autonomous Products”. (<nowiki>https://ul.org/UL4600</nowiki>)<br />
<br />
The SAE G-34, Artificial Intelligence in Aviation, Committee is responsible for creating and maintaining SAE Technical Reports, including standards, on the implementation and certification aspects related to AI technologies inclusive of any on or off-board system for the safe operation of aerospace systems and aerospace vehicles. (https://www.sae.org/works/committeeHome.do?comtID=TEAG34)<br />
<br />
==References==<br />
<br />
===Works Cited===<br />
Belani, Hrvoje, Marin Vuković, and Željka Car. Requirements Engineering Challenges in Building AI-Based Complex Systems. 2019. IEEE 27<sup>th</sup> International Requirements Engineering Conference Workshops (REW).<br />
<br />
Breck, Eric, Shanqing Cai, Eric Nielsen, Michael Salib and D. Sculley. What’s your ML Test Score? A Rubric for ML Production Systems. 2016. 30<sup>th</sup> Conference on Neural Information Processing Systems (NIPS 2016), Barcelona Spain.<br />
<br />
Daume III, Hal, and Daniel Marcu. Domain adaptation for statistical classifiers. ''Journal of Artificial Intelligence Research'', 26:101–126, 2006.<br />
<br />
Dreossi, T., A. Donzé, S.A. Seshia. Compositional falsification of cyber-physical systems with machine learning components. In Barrett, C., M. Davies, T. Kahsai (eds.) NFM 2017. LNCS, vol. 10227, pp. 357-372. Springer, Cham (2017). <nowiki>https://doi.org/10.1007/978-3-319-57288-8_26</nowiki><br />
<br />
Faria, José M. Machine learning safety: An overview. In ''Proceedings of the 26th Safety-Critical Systems Symposium'', York, UK, February 2018.<br />
<br />
Farrell, M., Luckcuck, M., Fisher, M. Robotics and Integrated Formal Methods. Necessity Meets Opportunity. In: ''Integrated Formal Methods''. pp. 161-171. Springer (2018).<br />
<br />
Gao, Jerry, Chunli Xie, and Chuanqi Tao. 2016. Big Data Validation and Quality Assurance – Issues, Challenges and Needs. 2016 IEEE Symposium on Service-Oriented System Engineering (SOSE), Oxford, UK, 2016, pp. 433-441, doi: 10.1109/SOSE.2016.63.<br />
<br />
Gleirscher, M., Foster, S., Woodcock, J. New Opportunities for Integrated Formal Methods. ''ACM Computing Surveys'' 52(6), 1-36 (2020).<br />
<br />
Goodfellow, Ian, J. Shlens, C. Szegedy. Explaining and harnessing adversarial examples. In International Conference on Learning Representations (ICLR), May 2015.<br />
<br />
Gunning, D. Explainable Artificial Intelligence (XAI). In IJCAI 2016 Workshop on Deep Learning for Artificial Intelligence (DLAI), July 2016.<br />
<br />
Huang, X., M. Kwiatkowska, S. Wang, and M. Wu. Safety Verification of deep neural networks. In. Majumdar, R., and V. Kunčak (eds.) CAV 2017. LNCS, vol. 10426, pp. 3-29. Springer, Cham (2017). <nowiki>https://doi.org/10.1007/978-3-319-63387-9_1</nowiki><br />
<br />
Ishikawa, Fuyuki and Nobukazu Yoshioka. How do Engineers Perceive Difficulties in Engineering of Machine-Learning Systems? - Questionnaire Survey. 2019 IEEE/ACM Joint 7th International Workshop on Conducting Empirical Studies in Industry (CESI) and 6th International Workshop on Software Engineering Research and Industrial Practice (SER&IP) (2019)<br />
<br />
Jones, Cliff B. Tentative steps toward a development method for interfering programs. ''ACM Transactions on Programming Languages and Systems'' (TOPLAS), 5(4):596–619, 1983.<br />
<br />
Kelly, T., and R. Weaver. The goal structuring notation – a safety argument notation. In Dependable Systems and Networks 2004 Workshop on Assurance Cases, July 2004.<br />
<br />
Klein, G., Andronick, J., Fernandez, M., Kuz, I., Murray, T., Heiser, G. Formally verified software in the real world. ''Comm. of the ACM'' 61(10), 68-77 (2018).<br />
<br />
Kuwajima, Hiroshi, Hirotoshi Yasuoka, and Toshihiro Nakae. Engineering problems in machine learning systems. ''Machine Learning'' (2020) 109:1103–1126. <nowiki>https://doi.org/10.1007/s10994-020-05872-w</nowiki><br />
<br />
Lwakatare, Lucy Ellen, Aiswarya Raj, Ivica Crnkovic, Jan Bosch, and Helena Holmström Olsson. Large-scale machine learning systems in real-world industrial settings: A review of challenges and solutions. ''Information and Software Technology'' 127 (2020) 106368<br />
<br />
Luckcuck, M., Farrell, M., Dennis, L.A., Dixon, C., Fisher, M. Formal Specification and Verification of Autonomous Robotic Systems: A Survey. ''ACM Computing Surveys'' 52(5), 1-41 (2019).<br />
<br />
Marijan, Dusica and Arnaud Gotlieb. Software Testing for Machine Learning. The Thirty-Fourth AAAI Conference on Artificial Intelligence (AAAI-20) (2020)<br />
<br />
Miller, Tim. Explanation in artificial intelligence: Insights from the social sciences. arXiv Preprint arXiv:1706.07269. (2017).<br />
<br />
Pei, K., Y. Cao, J Yang, and S. Jana. DeepXplore: automated whitebox testing of deep learning systems. In The 26<sup>th</sup> Symposium on Operating Systems Principles (SOSP 2017), pp. 1-18, October 2017.<br />
<br />
Picardi, Chiara, Paterson, Colin, Hawkins, Richard David et al. (2020) Assurance Argument Patterns and Processes for Machine Learning in Safety-Related Systems. In: ''Proceedings of the Workshop on Artificial Intelligence Safety'' (SafeAI 2020). CEUR Workshop Proceedings, pp. 23-30.<br />
<br />
Pullum, Laura L., Brian Taylor, and Marjorie Darrah, ''Guidance for the Verification and Validation of Neural Networks'', IEEE Computer Society Press (Wiley), 2007.<br />
<br />
Rozier, K.Y. Specification: The Biggest Bottleneck in Formal Methods and Autonomy. In: ''Verified Software. Theories, Tools, and Experiments''. pp. 8-26. Springer (2016).<br />
<br />
Schumann, Johan, Pramod Gupta and Yan Liu. Application of neural networks in High Assurance Systems: A Survey. In ''Applications of Neural Networks in High Assurance Systems'', Studies in Computational Intelligence, pp. 1-19. Springer, Berlin, Heidelberg, 2010.<br />
<br />
Sculley, D., Gary Holt, Daniel Golovin, Eugene Davydov, Todd Phillips, Dietmar Ebner, Vinay Chaudhary, Michael Young, Jean-François Crespo, and Dan Dennison. Machine Learning: the high interest credit card of technical debt. In NIPS 2014 Workshop on Software Engineering for Machine Learning (SE4ML), December 2014.<br />
<br />
Seshia, Sanjit A. Compositional verification without compositional specification for learning-based systems. Technical Report UCB/EECS-2017-164, EECS Department, University of California, Berkeley, Nov 2017.<br />
<br />
Seshia, Sanjit A., Dorsa Sadigh, and S. Shankar Sastry. Towards Verified Artificial Intelligence. arXiv:1606.08514v4 [cs.AI] 23 Jul 2020.<br />
<br />
Szegedy, Christian, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian J. Goodfellow, and Rob Fergus. Intriguing properties of neural networks. ICLR, abs/1312.6199, 2014. URL <nowiki>http://arxiv.org/abs/1312.6199</nowiki>.<br />
<br />
Taylor, Brian, ed. ''Methods and Procedures for the Verification and Validation of Artificial Neural Networks'', Springer-Verlag, 2005.<br />
<br />
Thompson, E. (2007). ''Mind in life: Biology, phenomenology, and the sciences of mind''. Cambridge, MA: Harvard University Press.<br />
<br />
Tiwari, Ashish, Bruno Dutertre, Dejan Jovanović, Thomas de Candia, Patrick D. Lincoln, John Rushby, Dorsa Sadigh, and Sanjit Seshia. Safety envelope for security. In ''Proceedings of the'' ''3rd International Conference on High Confidence Networked Systems'' (HiCoNS), pp. 85-94, Berlin, Germany, April 2014. ACM.<br />
<br />
Uesato, Jonathan, O’Donoghue, Brendan, van den Oord, Aaron, Kohli, Pushmeet. Adversarial Risk and the Dangers of Evaluating Against Weak Attacks. ''Proceedings of the 35<sup>th</sup> International Conference on Machine Learning'', Stockholm, Sweden, PMLR 80, 2018.<br />
<br />
Varshney, Kush R., and Homa Alemzadeh. On the safety of machine learning: Cyber-physical systems, decision sciences, and data products. ''Big Data'', 5(3):246–255, 2017.<br />
<br />
Webster, M., Wester, D.G., Araiza-Illan, D., Dixon, C., Eder, K., Fisher, M., Pipe, A.G. A corroborative approach to verification and validation of human-robot teams. ''J. Robotics Research'' 39(1) (2020).<br />
<br />
Xie, Xiaoyuan, J.W.K. Ho, C. Murphy, G. Kaiser, B. Xu, and T.Y. Chen. 2011. “Testing and Validating Machine Learning Classifiers by Metamorphic Testing,” ''Journal of Software Testing'', April 1, 84(4): 544-558, doi:10.1016/j.jss.2010.11.920.<br />
<br />
Zhang, J., Li, J. Testing and verification of neural-network-based safety-critical control software: A systematic literature review. ''Information and Software Technology'' 123, 106296 (2020).<br />
<br />
Zhang, J.M., Harman, M., Ma, L., Liu, Y. Machine learning testing: Survey, landscapes and horizons. ''IEEE Transactions on Software Engineering''. 2020, doi: 10.1109/TSE.2019.2962027.<br />
<br />
===Primary References===<br />
<br />
Belani, Hrvoje, Marin Vuković, and Željka Car. Requirements Engineering Challenges in Building AI-Based Complex Systems. 2019. IEEE 27<sup>th</sup> International Requirements Engineering Conference Workshops (REW).<br />
<br />
Dutta, S., Jha, S., Sankaranarayanan, S., Tiwari, A. 2018. Output range analysis for deep feedforward neural networks. In: NASA Formal Methods. pp. 121-138.<br />
<br />
Gopinath, D., G. Katz, C. Pāsāreanu, and C. Barrett. 2018. DeepSafe: A Data-Driven Approach for Assessing Robustness of Neural Networks. In: ''ATVA''.<br />
<br />
Huang, X., M. Kwiatkowska, S. Wang and M. Wu. 2017. Safety Verification of Deep Neural Networks. Computer Aided Verification.<br />
<br />
Jha, S., V. Raman, A. Pinto, T. Sahai, and M. Francis. 2017. On Learning Sparse Boolean Formulae for Explaining AI Decisions, ''NASA Formal Methods''.<br />
<br />
Katz, G., C. Barrett, D. Dill, K. Julian, M. Kochenderfer. 2017. Reluplex: An Efficient SMT Solver for Verifying Deep Neural Networks, <nowiki>https://arxiv.org/abs/1702.01135</nowiki>.<br />
<br />
Leofante, F., N. Narodytska, L. Pulina, A. Tacchella. 2018. Automated Verification of Neural Networks: Advances, Challenges and Perspectives, <nowiki>https://arxiv.org/abs/1805.09938</nowiki>.<br />
<br />
Marijan, Dusica and Arnaud Gotlieb. Software Testing for Machine Learning. The Thirty-Fourth AAAI Conference on Artificial Intelligence (AAAI-20) (2020).<br />
<br />
Mirman, M., T. Gehr, and M. Vechev. 2018. Differentiable Abstract Interpretation for Provably Robust Neural Networks. ''International Conference on Machine Learning''.<br />
<br />
Pullum, Laura L., Brian Taylor, and Marjorie Darrah, ''Guidance for the Verification and Validation of Neural Networks'', IEEE Computer Society Press (Wiley), 2007.<br />
<br />
Seshia, Sanjit A., Dorsa Sadigh, and S. Shankar Sastry. Towards Verified Artificial Intelligence. arXiv:1606.08514v4 [cs.AI] 23 Jul 2020.<br />
<br />
Taylor, Brian, ed. ''Methods and Procedures for the Verification and Validation of Artificial Neural Networks'', Springer-Verlag, 2005.<br />
<br />
Xiang, W., P. Musau, A. Wild, D.M. Lopez, N. Hamilton, X. Yang, J. Rosenfeld, and T. Johnson. 2018. Verification for Machine Learning, Autonomy, and Neural Networks Survey. <nowiki>https://arxiv.org/abs/1810.01989</nowiki><br />
<br />
Zhang, J., Li, J. Testing and verification of neural-network-based safety-critical control software: A systematic literature review. ''Information and Software Technology'' 123, 106296 (2020).<br />
<br />
===Additional References===<br />
Jha, Sumit Kumar, Susmit Jha, Rickard Ewetz, Sunny Raj, Alvaro Velasquez, Laura L. Pullum, and Ananthram Swami. An Extension of Fano’s Inequality for Characterizing Model Susceptibility to Membership Inference Attacks. arXiv:2009.08097v1 [cs.LG] 17 Sep 2020.<br />
<br />
Sunny Raj, Mesut Ozdag, Steven Fernandes, Sumit Kumar Jha, Laura Pullum, “On the Susceptibility of Deep Neural Networks to Natural Perturbations,” ''AI Safety 2019'' (held in conjunction with IJCAI 2019 - International Joint Conference on Artificial Intelligence), Macao, China, August 2019.<br />
<br />
Ak, R., R. Ghosh, G. Shao, H. Reed, Y.-T. Lee, L.L. Pullum. “Verification-Validation and Uncertainty Quantification Methods for Data-Driven Models in Advanced Manufacturing,” ''ASME Verification and Validation Symposium'', Minneapolis, MN, 2018.<br />
<br />
Pullum, L.L., C.A. Steed, S.K. Jha, and A. Ramanathan. “Mathematically Rigorous Verification and Validation of Scientific Machine Learning,” ''DOE Scientific Machine Learning Workshop'', Bethesda, MD, Jan/Feb 2018.<br />
<br />
Ramanathan, A., L.L. Pullum, Zubir Husein, Sunny Raj, Neslisah Totosdagli, Sumanta Pattanaik, and S.K. Jha. 2017. “Adversarial attacks on computer vision algorithms using natural perturbations.” In ''2017 10th International Conference on Contemporary Computing (IC3)''. Noida, India. August 2017.<br />
<br />
Raj, S., L.L. Pullum, A. Ramanathan, and S.K. Jha. 2017. “Work in Progress: Testing Autonomous cyber-physical systems using fuzzing features derived from convolutional neural networks.” In ''ACM SIGBED International Conference on Embedded Software'' (EMSOFT). Seoul, South Korea. October 2017.<br />
<br />
Raj, S., L.L. Pullum, A. Ramanathan, and S.K. Jha, “SATYA: Defending against Adversarial Attacks using Statistical Hypothesis Testing,” in ''10th International Symposium on Foundations and Practice of Security'' (FPS 2017), Nancy, France. (Best Paper Award), 2017.<br />
<br />
Ramanathan, A., Pullum, L.L., S. Jha, et al. “Integrating Symbolic and Statistical Methods for Testing Intelligent Systems: Applications to Machine Learning and Computer Vision.” ''IEEE Design, Automation & Test in Europe''(DATE), 2016.<br />
<br />
Pullum, L.L., C. Rouff, R. Buskens, X. Cui, E. Vassiv, and M. Hinchey, “Verification of Adaptive Systems,” ''AIAA Infotech@Aerospace'' 2012, April 2012. <br />
<br />
Pullum, L.L., and C. Symons, “Failure Analysis of a Complex Learning Framework Incorporating Multi-Modal and Semi-Supervised Learning,” In ''IEEE Pacific Rim International Symposium on Dependable Computing''(PRDC 2011), 308-313, 2011. <br />
<br />
Haglich, P., C. Rouff, and L.L. Pullum, “Detecting Emergent Behaviors with Semi-Boolean Algebra,” ''Proceedings of AIAA Infotech @ Aerospace'', 2010. <br />
<br />
Pullum, L.L., Marjorie A. Darrah, and Brian J. Taylor, “Independent Verification and Validation of Neural Networks – Developing Practitioner Assistance,” ''Software Tech News'', July 2004.<br />
----<br />
<br />
<center>[[Socio-technical Systems|< Previous Article]] | [[Emerging Topics|Parent Article]] | [[Transitioning Systems Engineering to a MOdel-based Discipline|Next Article >]]</center><br />
<br />
<center>'''SEBoK v. 2.3, released 30 October 2020'''</center><br />
<br />
[[Category: Part 8]]<br />
[[Category:Topic]]<br />
[[Category:Emerging Topics]]</div>
<hr />
<div>-----<br />
'''''Lead Authors:''' Robert Cloutier, Daniel DeLaurentis, Ha Phuong Le''<br />
-----<br />
<br />
Like other portions of the SEBoK, the notion and content of Part 8 is evolving. Part 8 consists of two Knowledge Areas (KAs): Emerging Topics and Emerging Research. <br />
<br />
[[File:SEBoK_Context_Diagram_Inner_P8_Ifezue_Obiako.png|centre|thumb|500x500px|'''Figure 1. SEBoK Part 8 in context (SEBoK Original).''' For more detail see [[Structure of the SEBoK]]]]<br />
<br />
==Scope and Purpose== <br />
While discussions of the practice of and need for systems engineering began appearing in journals from 1950 onward, the practice currently seems to be gaining momentum in most engineering and even non-engineering circles.<br />
<br />
The classically trained systems engineers of the 1970s and even 1980s are faced with a sea change in thinking brought on by the rapid advance of software-centric systems, cybersecurity, and agent-based, object-oriented, and model-based practices. These emerging practices bring their own methods and tools. Hall (1962, p. 5) may have been prescient when he wrote: “It is hard to say whether increasing complexity is the cause or the effect of man's effort to cope with his expanding environment. In either case a central feature of the trend has been the development of large and very complex systems which tie together modern society. These systems include abstract or non-physical systems, such as government and the economic system.”<br />
<br />
These changes and the rate of change are causing systems engineering to evolve. Some of the practices may not even be recognizable to classically trained systems engineers. This Part of the SEBoK is intended to introduce some of the more significant emerging changes to systems engineering.<br />
As topics discussed in this Part evolve and become mainstream, they will be moved into the appropriate Part of the SEBoK.<br />
<br />
System of Systems Engineering (SoSE) provides a recent example of an emerging topic from the systems engineering community that generated emerging research, ultimately resulting in a foundational body of knowledge that continues to expand. A recent article describing this evolution from emerging topic to solution is now referenced in Part 4 - [[Systems of Systems (SoS)]].<br />
<br />
==Overview of Emerging Topics==<br />
''See further: [[Emerging Topics]]''<br />
<br />
The Emerging Topics section is meant to inform the reader on the more significant and emerging changes to the practice of systems engineering. Examples of these emerging topics include:<br />
<br />
* What is the potential to change systems engineering processes or the ways in which we perform systems engineering?<br />
* How will the development of artificial intelligence impact systems engineering?<br />
** Will AI change the way we think of systems architecture? <br />
** How will we perform V&V of an AI system? <br />
* How will the push towards vertically integrated digital engineering influence systems engineering?<br />
* How are social features becoming more tightly connected to technical features of systems, and how is the modeling of socio-technical systems infusing into practice?<br />
<br />
==Overview of Emerging Research==<br />
''See further: [[Emerging Research]]'' <br />
<br />
As these emerging topics gain visibility, researchers will begin to investigate them. Corporate R&D may do early work, but academia and government will formalize this research. The Emerging Research section is a place to gather the references to this disparate work into a single repository to better inform systems engineers working on related topics. The references are collected from the following sources: <br />
* PhD dissertations<br />
* INCOSE publications and events <br />
* IEEE publications and events<br />
* Research funded by the National Science Foundation (NSF) – Engineering Design and Systems Engineering (EDSE)<br />
* Research funded by Systems Engineering Research Center (SERC)<br />
<br />
==References==<br />
===Works Cited===<br />
Hall, Arthur D. (1962). ''A Methodology for Systems Engineering.'' New York, NY, USA: Van Nostrand.<br />
<br />
===Additional References===<br />
Engstrom, E.W. (1957). "Systems engineering: A growing concept," in Electrical Engineering, vol. 76, no. 2, pp. 113-116, Feb. 1957, doi: 10.1109/EE.1957.6442968.<br />
<br />
Goode, Harry H. and Robert E. Machol. (1957). ''System Engineering: An Introduction to the Design of Large-Scale Systems.'' New York, NY, USA: McGraw-Hill.<br />
<br />
Kelly, Mervin J. (1950). “The Bell Telephone Laboratories—An example of an institute of creative technology”. Proceedings of the Royal Society B. Vol. 137, Issue 889. https://doi.org/10.1098/rspb.1950.0050.<br />
<br />
<center>[[Singapore Water Management|< Previous Article]] | [[SEBoK Table of Contents|Parent Article]] | [[Emerging Topics|Next Article >]]</center><br />
<br />
<center>'''SEBoK v. 2.4, released 17 May 2021'''</center><br />
<br />
[[Category: Part 8]]<br />
[[Category:Part]]</div>
<hr />
<div>'''''Lead Author:''''' ''Laura Pullum''<br />
----Many systems are being considered in which artificial intelligence (AI) will be a key element. Failure of an AI element can lead to system failure (Dreossi et al. 2017), hence the need for AI verification and validation (V&V). The element(s) containing AI capabilities is treated as a subsystem, and V&V is conducted on that subsystem and its interfaces with other elements of the system under study, just as V&V would be conducted on other subsystems. That is, the high-level definitions of verification and of validation do not change for systems containing one or more AI elements.<br />
<br />
However, AI V&V challenges require approaches and solutions beyond those for conventional or traditional (those without AI elements) systems. This article provides an overview of how machine learning components/subsystems “fit” in the systems engineering framework, identifies characteristics of AI subsystems that create challenges in their V&V, illuminates those challenges, and provides some potential solutions while noting open or continuing areas of research in the V&V of AI subsystems.<br />
<br />
== Overview of V&V for AI-based Systems ==<br />
Conventional systems are engineered via three overarching phases: requirements, design, and V&V. These phases are applied to each subsystem and to the system under study. As shown in Figure 1, this is the case even if the subsystem is based on AI techniques.<br />
<br />
[Figure 1]<br />
<br />
AI-based systems follow a different life cycle than do traditional systems. As shown in the general machine learning life cycle illustrated in Figure 2, V&V activities occur throughout the life cycle. In addition to requirements allocated to the AI subsystem (as is the case for conventional subsystems), there may also be requirements for data that flow up to the system from the AI subsystem.<br />
<br />
[Figure 2]<br />
<br />
== Characteristics of AI Leading to V&V Challenges ==<br />
Though some aspects of V&V for conventional systems can be used without modification, there are important characteristics of AI subsystems that lead to challenges in their verification and validation. In a survey of engineers, Ishikawa and Yoshioka (2019) identify attributes of machine learning that make their engineering difficult. According to the engineers surveyed, the top attributes, with a summary of the engineers’ comments, are:<br />
* ''Lack of an oracle'': It is difficult or impossible to clearly define the correctness criteria for system outputs or the right outputs for each individual input.<br />
* ''Imperfection'': It is intrinsically impossible for an AI system to be 100% accurate.<br />
* ''Uncertain behavior for untested data'': There is high uncertainty about how the system will behave in response to untested input data, as evidenced by radical changes in behavior given slight changes in input (e.g., adversarial examples).<br />
* ''High dependency of behavior on training data'': System behavior is highly dependent on the training data.<br />
These attributes are characteristic of AI itself and can be generalized as follows:<br />
* Erosion of determinism<br />
* Unpredictability and unexplainability of individual outputs (Sculley et al., 2014)<br />
* Unanticipated, emergent behavior, and unintended consequences of algorithms<br />
* Complex decision making of the algorithms<br />
* Difficulty of maintaining consistency and weakness against slight changes in inputs (Goodfellow et al., 2015)<br />
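The “weakness against slight changes in inputs” noted above can be made concrete with a toy model. The following sketch (an invented linear classifier, not from the cited surveys) shows a decision flipping under a perturbation far smaller than typical measurement noise:<br />

```python
# Minimal illustration (hypothetical model, not from the article): a linear
# classifier whose decision flips under a tiny input perturbation, the same
# effect adversarial examples exploit in deep networks.

def predict(weights, x, bias=0.0):
    """Linear classifier: returns class 1 if w.x + b > 0, else class 0."""
    score = sum(w * xi for w, xi in zip(weights, x)) + bias
    return 1 if score > 0 else 0

# A point sitting very close to the decision boundary.
w = [1.0, -1.0]
x = [0.500, 0.499]            # w.x = 0.001 -> class 1

# A perturbation of 0.003 on one feature.
x_perturbed = [0.500, 0.502]  # w.x = -0.002 -> class 0

label_clean = predict(w, x)
label_perturbed = predict(w, x_perturbed)
# label_clean == 1 while label_perturbed == 0: the decision flipped.
```

Deep networks exhibit the same behavior in a much higher-dimensional input space, which is what makes the uncertainty over untested data difficult to bound.<br />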
<br />
== V&V Challenges of AI Systems ==<br />
<br />
=== Requirements ===<br />
Challenges with respect to AI requirements and AI requirements engineering are extensive, due in part to the practice by some of treating the AI element as a “black box” (Gunning 2016). Formal specification has been attempted and has proven difficult for hard-to-formalize tasks; it requires decisions on the use of quantitative or Boolean specifications and on the use of data versus formal requirements. The challenge here is to design effective methods to specify both desired and undesired properties of systems that use AI- or ML-based components (Seshia 2020). <br />
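The distinction between Boolean and quantitative specifications mentioned above can be illustrated with a minimal sketch. Assuming a simple bounded-output requirement and signal-temporal-logic-style robustness semantics (both invented for illustration), the same property can be evaluated either way:<br />

```python
# Sketch (assumed semantics, in the spirit of signal temporal logic): the same
# requirement "the output always stays within [lo, hi]" evaluated as a Boolean
# specification and as a quantitative one that also reports *how robustly* the
# trace satisfies or violates it.

def boolean_spec(trace, lo, hi):
    """True iff every output in the trace is within bounds."""
    return all(lo <= y <= hi for y in trace)

def quantitative_spec(trace, lo, hi):
    """Robustness: smallest margin to a bound; negative means violation."""
    return min(min(y - lo, hi - y) for y in trace)

trace = [0.2, 0.5, 0.9, 0.4]                 # hypothetical model outputs
ok = boolean_spec(trace, 0.0, 1.0)           # True: all samples in bounds
margin = quantitative_spec(trace, 0.0, 1.0)  # ~0.1: closest approach to a bound
```

The quantitative form is often preferred for ML components because its margin gives an optimization target for falsification and testing, whereas the Boolean form only says pass or fail.<br />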
<br />
A taxonomy of AI requirements engineering challenges, outlined by Belani and colleagues (2019), is shown in Table 3. <br />
{| class="wikitable"<br />
|+Table 3: Requirements engineering for AI (RE4AI) taxonomy, mapping challenges to AI-related entities and requirements engineering activities (after (Belani et al., 2019))<br />
!RE4AI<br />
! colspan="3" |AI Related Entities<br />
|-<br />
|'''RE Activities'''<br />
|'''Data'''<br />
|'''Model'''<br />
|'''System'''<br />
|-<br />
|'''Elicitation'''<br />
|<nowiki>- Availability of large datasets</nowiki><br />
<br />
- Requirements analyst upgrade<br />
|<nowiki>- Lack of domain knowledge</nowiki><br />
<br />
- Undeclared consumers<br />
|<nowiki>- How to define problem /scope</nowiki><br />
<br />
- Regulation (e.g., ethics) not clear<br />
|-<br />
|'''Analysis'''<br />
|<nowiki>- Imbalanced datasets, silos</nowiki><br />
<br />
- Role: data scientist needed<br />
|<nowiki>- No trivial workflows</nowiki><br />
<br />
- Automation tools needed<br />
|<nowiki>- No integration of end results</nowiki><br />
<br />
- Role: business analyst upgrade<br />
|-<br />
|'''Specification'''<br />
|<nowiki>- Data labelling is costly, needed</nowiki><br />
<br />
- Role: data engineer needed<br />
|<nowiki>- No end-to-end pipeline support</nowiki><br />
<br />
- Minimum viable model useful<br />
|<nowiki>- Avoid design anti- patterns</nowiki><br />
<br />
- Cognitive / system architect needed<br />
|-<br />
|'''Validation'''<br />
|<nowiki>- Training data critical analysis</nowiki><br />
<br />
- Data dependencies<br />
|<nowiki>- Entanglement, CACE problem</nowiki><br />
<br />
- High scalability issues for ML<br />
|<nowiki>- Debugging, interpretability</nowiki><br />
<br />
- Hidden feedback loops<br />
|-<br />
|'''Management'''<br />
|<nowiki>- Experiment management</nowiki><br />
<br />
- No GORE-like method polished<br />
|<nowiki>- Difficult to log and reproduce</nowiki><br />
<br />
- DevOps role for AI needed<br />
|<nowiki>- IT resource limitations, costs</nowiki><br />
<br />
- Measuring performance<br />
|-<br />
|'''Documentation'''<br />
|<nowiki>- Data & model visualization</nowiki><br />
<br />
- Role: research scientist useful<br />
|<nowiki>- Datasets and model versions</nowiki><br />
<br />
- Education and training of staff<br />
|<nowiki>- Feedback from end-users</nowiki><br />
<br />
- Development method<br />
|-<br />
|'''All of the Above'''<br />
| colspan="3" | - Data privacy and data safety<br />
<br />
- Data dependencies<br />
|}<br />
CACE: changing anything changes everything<br />
<br />
GORE: goal-oriented requirements engineering<br />
<br />
=== Data ===<br />
Data is the lifeblood of AI capabilities, given that it is used to train and evaluate AI models. Data quality attributes of importance to AI include accuracy, currency and timeliness, correctness, and consistency, in addition to usability, security and privacy, accessibility, accountability, scalability, and lack of bias, among others. As noted above, the correctness of unsupervised methods is embedded in the training data and the environment.<br />
<br />
There is a question of coverage of the operational space by the training data. If the data does not adequately cover the operational space, the behavior of the AI component is questionable. However, there are no strong guarantees on when a data set is ‘large enough’. In addition, ‘large’ is not sufficient: the data must sufficiently cover the operational space.<br />
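One rough, hypothetical way to assess coverage of a bounded operational space is to discretize the space and count the cells that contain at least one training sample; the grid size and data below are invented for illustration, and passing such a check is not a guarantee of adequacy:<br />

```python
# Rough sketch (hypothetical check, not a guarantee): estimate how much of a
# discretized operational space the training data actually covers, by binning
# each input dimension and counting occupied grid cells.

def coverage(samples, bounds, bins):
    """Fraction of grid cells (bins per dimension) hit by at least one sample."""
    occupied = set()
    for x in samples:
        cell = []
        for xi, (lo, hi) in zip(x, bounds):
            idx = int((xi - lo) / (hi - lo) * bins)
            cell.append(min(max(idx, 0), bins - 1))  # clamp boundary values
        occupied.add(tuple(cell))
    return len(occupied) / (bins ** len(bounds))

# 2-D operational space in [0,1] x [0,1], discretized into a 4x4 grid.
data = [(0.1, 0.1), (0.1, 0.9), (0.9, 0.1), (0.9, 0.9)]
frac = coverage(data, [(0.0, 1.0), (0.0, 1.0)], bins=4)
# frac == 0.25: only the four corner cells are exercised.
```

Such grid counts scale poorly with dimensionality, which is one reason coverage adequacy for high-dimensional AI inputs remains an open research question.<br />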
<br />
Another challenge with data is that of adversarial inputs. Szegedy et al. (2014) discovered that several ML models are vulnerable to adversarial examples. This has been shown many times on image classification software; however, adversarial attacks can be made against other AI tasks (e.g., natural language processing) and against techniques other than neural networks (typically used in image classification), such as reinforcement learning (e.g., reward hacking) models.<br />
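The adversarial-example effect described by Szegedy et al. (2014) can be sketched on a hand-built logistic model using the fast-gradient-sign idea of Goodfellow et al. (2015); the weights, input, label, and epsilon below are all invented for illustration:<br />

```python
import math

# Illustrative sketch of the fast-gradient-sign method on a hand-built
# logistic-regression model. All numbers are made up for the example.

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def predict_prob(w, b, x):
    """Probability of class 1 under a logistic model."""
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)

def fgsm(w, b, x, y, eps):
    """Perturb x by eps in the direction that increases the loss for label y."""
    p = predict_prob(w, b, x)
    sign = lambda v: (v > 0) - (v < 0)
    # For logistic regression, d(loss)/dx = (p - y) * w.
    return [xi + eps * sign((p - y) * wi) for xi, wi in zip(x, w)]

w, b = [2.0, -3.0], 0.0
x, y = [0.4, 0.2], 1

clean = predict_prob(w, b, x)        # ~0.55 -> class 1 (correct)
x_adv = fgsm(w, b, x, y, eps=0.15)   # each feature moved by only 0.15
adv = predict_prob(w, b, x_adv)      # ~0.37 -> class 0 (misclassified)
```

In deep networks the same one-step perturbation, spread over thousands of input dimensions, can be visually imperceptible while still flipping the prediction.<br />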
<br />
=== Model ===<br />
Numerous V&V challenges arise in the model space, some of which are provided below.<br />
* ''Modeling the environment'': Unknown variables, determining the correct fidelity to model, modeling human behavior. The challenge problem is providing a systematic method of environment modeling that allows one to provide provable guarantees on the system’s behavior even when there is considerable uncertainty about the environment. (Seshia 2020)<br />
* ''Modeling learning systems'': Very high dimensional input space, very high dimensional parameter or state space, online adaptation/evolution, modeling context (Seshia 2020).<br />
* ''Design and verification of models and data'': data generation, quantitative verification, compositional reasoning, and compositional specification (Seshia 2020). The challenge is to develop techniques for compositional reasoning that do not rely on having complete compositional specifications (Seshia 2017).<br />
* ''Optimization strategy'': must balance between over- and under-specification. One approach, instead of using distance measures (between predicted and actual results), uses the cost of an erroneous result (e.g., an incorrect classification) as a criterion (Faria 2018; Varshney and Alemzadeh 2017).<br />
* ''Online learning'': requires monitoring; need to ensure its exploration does not result in unsafe states.<br />
* ''Formal methods'': intractable state space explosion from complexity of the software and the system’s interaction with its environment, an issue with formal specifications.<br />
* ''Bias'': arises in algorithms from underrepresented or incomplete training data, or from reliance on flawed information that reflects historical inequities. A biased algorithm may lead to decisions with collective disparate impact. There is a trade-off between fairness and accuracy in the mitigation of an algorithm’s bias.<br />
* ''Test coverage'': effective metrics for test coverage of AI components are an active area of research with several candidate metrics, but currently no clear best practice.<br />
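One candidate coverage metric from the literature cited here is neuron coverage (Pei et al. 2017): the fraction of hidden units activated above a threshold by at least one test input. The tiny network below is invented for illustration:<br />

```python
# Sketch of neuron coverage (in the spirit of Pei et al. 2017) for a tiny
# hand-built feedforward ReLU network. Weights and test inputs are invented.

def relu(v):
    return [max(0.0, z) for z in v]

def layer(weights, x):
    """One dense layer: each row of `weights` produces one neuron's input."""
    return [sum(w * xi for w, xi in zip(row, x)) for row in weights]

def neuron_coverage(hidden_layers, test_inputs, threshold=0.0):
    """Fraction of hidden neurons activated above `threshold` by the suite."""
    covered = set()
    for x in test_inputs:
        act = x
        for li, W in enumerate(hidden_layers):
            act = relu(layer(W, act))
            covered.update((li, ni) for ni, a in enumerate(act) if a > threshold)
    total = sum(len(W) for W in hidden_layers)
    return len(covered) / total

# Two hidden layers of two neurons each (four neurons total).
net = [
    [[1.0, 0.0], [-1.0, 0.0]],  # layer 1: neurons fire on x0 > 0 / x0 < 0
    [[1.0, 1.0], [0.0, 1.0]],   # layer 2
]
tests = [[1.0, 0.0]]            # a single test input
cov = neuron_coverage(net, tests)
# cov == 0.5: half the neurons never fire on this suite.
```

A low score flags behavior the test suite never exercises, though a high score, like code coverage, does not by itself establish correctness.<br />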
<br />
=== Properties ===<br />
Assurance of several AI system properties is necessary to enable trust in the system, i.e., the system’s trustworthiness. This is a separate though necessary aspect of system dependability for AI systems. Some important properties are listed below; the list, though extensive, is not comprehensive.<br />
* ''Accountability'': refers to the need of an AI system to be answerable for its decisions, actions and performance to users and others with whom the AI system interacts<br />
* ''Controllability'': refers to the ability of a human or other external agent to intervene in the AI system’s functioning<br />
* ''Explainability'': refers to the property of an AI system to express important factors influencing the AI system results or to provide details/reasons behind its functioning so that humans can understand<br />
* ''Interpretability'': refers to the degree to which a human can understand the cause of a decision (Miller 2017)<br />
* ''Reliability'': refers to the property of consistent intended behavior and results<br />
* ''Resilience'': refers to the ability of a system to recover operations quickly following an incident<br />
* ''Robustness'': refers to the ability of a system to maintain its level of performance when errors occur during execution and to maintain that level of performance given erroneous inputs and parameters<br />
* ''Safety'': refers to the freedom from unacceptable risk<br />
* ''Transparency'': refers to the need to describe, inspect and reproduce the mechanisms through which AI systems make decisions, communicating this to relevant stakeholders.<br />
<br />
== V&V Approaches and Standards ==<br />
<br />
=== V&V Approaches ===<br />
Prior to the proliferation of deep learning, research on V&V of neural networks addressed adaptation of available standards, such as the then-current IEEE Std 1012 (Software Verification and Validation) processes (Pullum et al. 2007), areas that needed to be augmented to enable V&V (Taylor 2005), and examples of V&V for high-assurance systems with neural networks (Schumann et al. 2010). While these books provide techniques and lessons learned, many of which remain relevant, additional challenges due to deep learning remain unsolved.<br />
<br />
One of the challenges is data validation. It is vital that the data upon which AI depends undergo V&V. Data quality attributes that are important for AI systems include accuracy, currency and timeliness, correctness, consistency, usability, security and privacy, accessibility, accountability, scalability, lack of bias, and coverage of the state space. Data validation steps can include file validation, import validation, domain validation, transformation validation, and aggregation rule and business validation (Gao et al. 2016). <br />
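The domain-validation step named above can be sketched as a rule table checked against each record before it enters a training set; the field names and rules here are hypothetical:<br />

```python
# Hypothetical sketch of domain validation: every record is checked against
# declared domains before being admitted to a training set. Field names and
# rules are invented for illustration.

DOMAINS = {
    "age":    lambda v: isinstance(v, int) and 0 <= v <= 120,
    "sensor": lambda v: isinstance(v, float) and -1.0 <= v <= 1.0,
    "label":  lambda v: v in {"ok", "fault"},
}

def validate(record):
    """Return the list of fields that violate their declared domain."""
    return [f for f, rule in DOMAINS.items() if not rule(record.get(f))]

good = {"age": 42, "sensor": 0.3, "label": "ok"}
bad  = {"age": 42, "sensor": 7.5, "label": "maybe"}

errors_good = validate(good)  # no violations
errors_bad = validate(bad)    # "sensor" out of range, "label" outside domain
```

The same pattern extends to the other steps (file, import, transformation, aggregation) by swapping in the appropriate rule set for each pipeline stage.<br />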
<br />
There are several approaches to V&V of AI components, including formal methods (e.g., formal proofs, model checking, probabilistic verification), software testing, simulation-based testing and experiments. Some specific approaches are:<br />
* Metamorphic testing to test ML algorithms, addressing the oracle problem (Xie et al., 2011)<br />
* A ML test score consisting of tests for features and data, model development and ML infrastructure, and monitoring tests for ML (Breck et al., 2016)<br />
* Checking for inconsistency with desired behavior and systematically searching for worst-case outcomes when testing consistency with specifications.<br />
* Corroborative verification (Webster et al., 2020), in which several verification methods, working at different levels of abstraction and applied to the same AI component, may prove useful to verification of AI components of systems.<br />
* Testing against strong adversarial attacks (Uesato et al., 2018); researchers have found that models may show robustness to weak adversarial attacks yet little to no accuracy against strong attacks (Athalye et al., 2018; Uesato et al., 2018; Carlini and Wagner, 2017).<br />
* Use of formal verification to prove that models are consistent with specifications, e.g., (Huang et al., 2017).<br />
* Assurance cases combining the results of V&V and other activities as evidence to support claims on the assurance of systems with AI components (Kelly and Weaver, 2004; Picardi et al. 2020).<br />
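Metamorphic testing, the first approach listed above, addresses the oracle problem by checking relations that must hold between related runs rather than checking outputs against known-correct answers. A minimal sketch (a hand-rolled 1-nearest-neighbour classifier and an order-invariance relation, both invented for illustration):<br />

```python
# Sketch of a metamorphic test in the spirit of Xie et al. (2011): with no
# oracle for "the correct prediction", we check a relation that must hold
# regardless — here, that a 1-nearest-neighbour classifier is invariant to
# the order of its training data. The tiny data set is invented.

def nn_predict(train, x):
    """1-NN: label of the closest training point (squared Euclidean)."""
    dist = lambda p: sum((a - b) ** 2 for a, b in zip(p[0], x))
    return min(train, key=dist)[1]

train = [((0.0, 0.0), "A"), ((1.0, 1.0), "B"), ((0.0, 1.0), "A")]
queries = [(0.1, 0.2), (0.9, 0.8), (0.2, 0.9)]

# Metamorphic relation: permuting the training data must not change outputs.
original = [nn_predict(train, q) for q in queries]
follow_up = [nn_predict(list(reversed(train)), q) for q in queries]
violations = [q for q, a, b in zip(queries, original, follow_up) if a != b]
# violations == []: the relation holds for this suite.
```

A violated relation signals a defect without ever needing to know the “right” label for any individual query, which is exactly the situation described under the lack-of-an-oracle attribute earlier in this article.<br />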
<br />
=== Standards ===<br />
Standards development organizations (SDOs) are earnestly working to develop standards in AI, including for the safety and trustworthiness of AI systems. Below are just a few of the SDOs and their AI standardization efforts.<br />
<br />
ISO is the first international SDO to set up an expert group to carry out standardization activities for AI. Subcommittee (SC) 42 is part of the joint technical committee ISO/IEC JTC 1. SC 42 has a working group on foundational standards to provide a framework and a common vocabulary, and several other working groups on computational approaches to and characteristics of AI systems, trustworthiness, use cases, applications, and big data. (https://www.iso.org/committee/6794475.html)<br />
<br />
The IEEE P7000 series of projects are part of the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems, launched in 2016. IEEE P7009, “Fail-Safe Design of Autonomous and Semi-Autonomous Systems” is one of 13 standards in the series. (https://standards.ieee.org/project/7009.html)<br />
<br />
Underwriters Laboratory has been involved in technology safety for 125 years and has released ANSI/UL 4600 “Standard for Safety for the Evaluation of Autonomous Products”. (<nowiki>https://ul.org/UL4600</nowiki>)<br />
<br />
The SAE G-34, Artificial Intelligence in Aviation, Committee is responsible for creating and maintaining SAE Technical Reports, including standards, on the implementation and certification aspects related to AI technologies inclusive of any on or off-board system for the safe operation of aerospace systems and aerospace vehicles. (https://www.sae.org/works/committeeHome.do?comtID=TEAG34)<br />
<br />
==References==<br />
<br />
===Works Cited===<br />
Belani, Hrvoje, Marin Vuković, and Željka Car. Requirements Engineering Challenges in Building AI-Based Complex Systems. 2019. IEEE 27<sup>th</sup> International Requirements Engineering Conference Workshops (REW).<br />
<br />
Breck, Eric, Shanqing Cai, Eric Nielsen, Michael Salib and D. Sculley. What’s your ML Test Score? A Rubric for ML Production Systems. 2016. 30<sup>th</sup> Conference on Neural Information Processing Systems (NIPS 2016), Barcelona Spain.<br />
<br />
Daume III, Hal, and Daniel Marcu. Domain adaptation for statistical classifiers. ''Journal of Artificial Intelligence Research'', 26:101–126, 2006.<br />
<br />
Dreossi, T., A. Donzé, S.A. Seshia. Compositional falsification of cyber-physical systems with machine learning components. In Barrett, C., M. Davies, T. Kahsai (eds.) NFM 2017. LNCS, vol. 10227, pp. 357-372. Springer, Cham (2017). <nowiki>https://doi.org/10.1007/978-3-319-57288-8_26</nowiki><br />
<br />
Faria, José M. Machine learning safety: An overview. In ''Proceedings of the 26th Safety-Critical Systems Symposium'', York, UK, February 2018.<br />
<br />
Farrell, M., Luckcuck, M., Fisher, M. Robotics and Integrated Formal Methods. Necessity Meets Opportunity. In: ''Integrated Formal Methods''. pp. 161-171. Springer (2018).<br />
<br />
Gao, Jerry, Chunli Xie, and Chuanqi Tao. 2016. Big Data Validation and Quality Assurance – Issues, Challenges and Needs. 2016 IEEE Symposium on Service-Oriented System Engineering (SOSE), Oxford, UK, 2016, pp. 433-441, doi: 10.1109/SOSE.2016.63.<br />
<br />
Gleirscher, M., Foster, S., Woodcock, J. New Opportunities for Integrated Formal Methods. ''ACM Computing Surveys'' 52(6), 1-36 (2020).<br />
<br />
Goodfellow, Ian, J. Shlens, C. Szegedy. Explaining and harnessing adversarial examples. In International Conference on Learning Representations (ICLR), May 2015.<br />
<br />
Gunning, D. Explainable Artificial Intelligence (XAI). In IJCAI 2016 Workshop on Deep Learning for Artificial Intelligence (DLAI), July 2016.<br />
<br />
Huang, X., M. Kwiatkowska, S. Wang, and M. Wu. Safety Verification of deep neural networks. In. Majumdar, R., and V. Kunčak (eds.) CAV 2017. LNCS, vol. 10426, pp. 3-29. Springer, Cham (2017). <nowiki>https://doi.org/10.1007/978-3-319-63387-9_1</nowiki><br />
<br />
Ishikawa, Fuyuki and Nobukazu Yoshioka. How do Engineers Perceive Difficulties in Engineering of Machine-Learning Systems? - Questionnaire Survey. 2019 IEEE/ACM Joint 7th International Workshop on Conducting Empirical Studies in Industry (CESI) and 6th International Workshop on Software Engineering Research and Industrial Practice (SER&IP) (2019)<br />
<br />
Jones, Cliff B. Tentative steps toward a development method for interfering programs. ''ACM Transactions on Programming Languages and Systems'' (TOPLAS), 5(4):596–619, 1983.<br />
<br />
Kelly, T., and R. Weaver. The goal structuring notation – a safety argument notation. In Dependable Systems and Networks 2004 Workshop on Assurance Cases, July 2004.<br />
<br />
Klein, G., Andronick, J., Fernandez, M., Kuz, I., Murray, T., Heiser, G. Formally verified software in the real world. ''Comm. of the ACM'' 61(10), 68-77 (2018).<br />
<br />
Kuwajima, Hiroshi, Hirotoshi Yasuoka, and Toshihiro Nakae. Engineering problems in machine learning systems. ''Machine Learning'' (2020) 109:1103–1126. <nowiki>https://doi.org/10.1007/s10994-020-05872-w</nowiki><br />
<br />
Lwakatare, Lucy Ellen, Aiswarya Raj, Ivica Crnkovic, Jan Bosch, and Helena Holmström Olsson. Large-scale machine learning systems in real-world industrial settings: A review of challenges and solutions. ''Information and Software Technology'' 127 (2020) 106368<br />
<br />
Luckcuck, M., Farrell, M., Dennis, L.A., Dixon, C., Fisher, M. Formal Specification and Verification of Autonomous Robotic Systems: A Survey. ''ACM Computing Surveys'' 52(5), 1-41 (2019).<br />
<br />
Marijan, Dusica and Arnaud Gotlieb. Software Testing for Machine Learning. The Thirty-Fourth AAAI Conference on Artificial Intelligence (AAAI-20) (2020)<br />
<br />
Miller, Tim. Explanation in artificial intelligence: Insights from the social sciences. arXiv Preprint arXiv:1706.07269. (2017).<br />
<br />
Pei, K., Y. Cao, J Yang, and S. Jana. DeepXplore: automated whitebox testing of deep learning systems. In The 26<sup>th</sup> Symposium on Operating Systems Principles (SOSP 2017), pp. 1-18, October 2017.<br />
<br />
Picardi, Chiara, Paterson, Colin, Hawkins, Richard David et al. (2020) Assurance Argument Patterns and Processes for Machine Learning in Safety-Related Systems. In: ''Proceedings of the Workshop on Artificial Intelligence Safety'' (SafeAI 2020). CEUR Workshop Proceedings, pp. 23-30.<br />
<br />
Pullum, Laura L., Brian Taylor, and Marjorie Darrah, ''Guidance for the Verification and Validation of Neural Networks'', IEEE Computer Society Press (Wiley), 2007.<br />
<br />
Rozier, K.Y. Specification: The Biggest Bottleneck in Formal Methods and Autonomy. In: ''Verified Software. Theories, Tools, and Experiments''. pp. 8-26. Springer (2016).<br />
<br />
Schumann, Johan, Pramod Gupta and Yan Liu. Application of neural networks in High Assurance Systems: A Survey. In ''Applications of Neural Networks in High Assurance Systems'', Studies in Computational Intelligence, pp. 1-19. Springer, Berlin, Heidelberg, 2010.<br />
<br />
Sculley, D., Gary Holt, Daniel Golovin, Eugene Davydov, Todd Phillips, Dietmar Ebner, Vinay Chaudhary, Michael Young, Jean-François Crespo, and Dan Dennison. Machine Learning: the high interest credit card of technical debt. In NIPS 2014 Workshop on Software Engineering for Machine Learning (SE4ML), December 2014.<br />
<br />
Seshia, Sanjit A. Compositional verification without compositional specification for learning-based systems. Technical Report UCB/EECS-2017-164, EECS Department, University of California, Berkeley, Nov 2017.<br />
<br />
Seshia, Sanjit A., Dorsa Sadigh, and S. Shankar Sastry. Towards Verified Artificial Intelligence. arXiv:1606.08514v4 [cs.AI] 23 Jul 2020.<br />
<br />
Szegedy, Christian, Zaremba, Wojciech, Sutskever, Ilya, Bruna, Joan, Erhan, Dumitru, Goodfellow, Ian J., and Fergus, Rob. Intriguing properties of neural networks. ICLR, abs/1312.6199, 2014. URL <nowiki>http://arxiv.org/abs/1312.6199</nowiki>.<br />
<br />
Taylor, Brian, ed. ''Methods and Procedures for the Verification and Validation of Artificial Neural Networks'', Springer-Verlag, 2005.<br />
<br />
Thompson, E. (2007). ''Mind in life: Biology, phenomenology, and the sciences of mind''. Cambridge, MA: Harvard University Press.<br />
<br />
Tiwari, Ashish, Bruno Dutertre, Dejan Jovanović, Thomas de Candia, Patrick D. Lincoln, John Rushby, Dorsa Sadigh, and Sanjit Seshia. Safety envelope for security. In ''Proceedings of the'' ''3rd International Conference on High Confidence Networked Systems'' (HiCoNS), pp. 85-94, Berlin, Germany, April 2014. ACM.<br />
<br />
Uesato, Jonathan, O’Donoghue, Brendan, van den Oord, Aaron, Kohli, Pushmeet. Adversarial Risk and the Dangers of Evaluating Against Weak Attacks. ''Proceedings of the 35<sup>th</sup> International Conference on Machine Learning'', Stockholm, Sweden, PMLR 80, 2018.<br />
<br />
Varshney, Kush R., and Homa Alemzadeh. On the safety of machine learning: Cyber-physical systems, decision sciences, and data products. ''Big Data'', 5(3):246–255, 2017.<br />
<br />
Webster, M., Wester, D.G., Araiza-Illan, D., Dixon, C., Eder, K., Fisher, M., Pipe, A.G. A corroborative approach to verification and validation of human-robot teams. ''J. Robotics Research'' 39(1) (2020).<br />
<br />
Xie, Xiaoyuan, J.W.K. Ho, C. Murphy, G. Kaiser, B. Xu, and T.Y. Chen. 2011. “Testing and Validating Machine Learning Classifiers by Metamorphic Testing,” ''Journal of Systems and Software'', April 1, 84(4): 544-558, doi:10.1016/j.jss.2010.11.920.<br />
<br />
Zhang, J., Li, J. Testing and verification of neural-network-based safety-critical control software: A systematic literature review. ''Information and Software Technology'' 123, 106296 (2020).<br />
<br />
Zhang, J.M., Harman, M., Ma, L., Liu, Y. Machine learning testing: Survey, landscapes and horizons. ''IEEE Transactions on Software Engineering''. 2020, doi: 10.1109/TSE.2019.2962027.<br />
<br />
===Primary References===<br />
<br />
Belani, Hrvoje, Marin Vuković, and Željka Car. Requirements Engineering Challenges in Building AI-Based Complex Systems. 2019. IEEE 27<sup>th</sup> International Requirements Engineering Conference Workshops (REW).<br />
<br />
Dutta, S., Jha, S., Sankaranarayanan, S., Tiwari, A. 2018. Output range analysis for deep feedforward neural networks. In: NASA Formal Methods. pp. 121-138.<br />
<br />
Gopinath, D., G. Katz, C. Pāsāreanu, and C. Barrett. 2018. DeepSafe: A Data-Driven Approach for Assessing Robustness of Neural Networks. In: ''ATVA''.<br />
<br />
Huang, X., M. Kwiatkowska, S. Wang and M. Wu. 2017. Safety Verification of Deep Neural Networks. Computer Aided Verification.<br />
<br />
Jha, S., V. Raman, A. Pinto, T. Sahai, and M. Francis. 2017. On Learning Sparse Boolean Formulae for Explaining AI Decisions, ''NASA Formal Methods''.<br />
<br />
Katz, G., C. Barrett, D. Dill, K. Julian, M. Kochenderfer. 2017. Reluplex: An Efficient SMT Solver for Verifying Deep Neural Networks, <nowiki>https://arxiv.org/abs/1702.01135</nowiki>.<br />
<br />
Leofante, F., N. Narodytska, L. Pulina, A. Tacchella. 2018. Automated Verification of Neural Networks: Advances, Challenges and Perspectives, <nowiki>https://arxiv.org/abs/1805.09938</nowiki><br />
<br />
Marijan, Dusica and Arnaud Gotlieb. Software Testing for Machine Learning. The Thirty-Fourth AAAI Conference on Artificial Intelligence (AAAI-20) (2020)<br />
<br />
Mirman, M., T. Gehr, and M. Vechev. 2018. Differentiable Abstract Interpretation for Provably Robust Neural Networks. ''International Conference on Machine Learning''.<br />
<br />
Pullum, Laura L., Brian Taylor, and Marjorie Darrah, ''Guidance for the Verification and Validation of Neural Networks'', IEEE Computer Society Press (Wiley), 2007.<br />
<br />
Seshia, Sanjit A., Dorsa Sadigh, and S. Shankar Sastry. Towards Verified Artificial Intelligence. arXiv:1606.08514v4 [cs.AI] 23 Jul 2020.<br />
<br />
Taylor, Brian, ed. ''Methods and Procedures for the Verification and Validation of Artificial Neural Networks'', Springer-Verlag, 2005.<br />
<br />
Xiang, W., P. Musau, A. Wild, D.M. Lopez, N. Hamilton, X. Yang, J. Rosenfeld, and T. Johnson. 2018. Verification for Machine Learning, Autonomy, and Neural Networks Survey. <nowiki>https://arxiv.org/abs/1810.01989</nowiki><br />
<br />
Zhang, J., Li, J. Testing and verification of neural-network-based safety-critical control software: A systematic literature review. ''Information and Software Technology'' 123, 106296 (2020).<br />
<br />
===Additional References===<br />
Jha, Sumit Kumar, Susmit Jha, Rickard Ewetz, Sunny Raj, Alvaro Velasquez, Laura L. Pullum, and Ananthram Swami. An Extension of Fano’s Inequality for Characterizing Model Susceptibility to Membership Inference Attacks. arXiv:2009.08097v1 [cs.LG] 17 Sep 2020.<br />
<br />
Sunny Raj, Mesut Ozdag, Steven Fernandes, Sumit Kumar Jha, Laura Pullum, “On the Susceptibility of Deep Neural Networks to Natural Perturbations,” ''AI Safety 2019'' (held in conjunction with IJCAI 2019 - International Joint Conference on Artificial Intelligence), Macao, China, August 2019.<br />
<br />
Ak, R., R. Ghosh, G. Shao, H. Reed, Y.-T. Lee, L.L. Pullum. “Verification-Validation and Uncertainty Quantification Methods for Data-Driven Models in Advanced Manufacturing,” ''ASME Verification and Validation Symposium'', Minneapolis, MN, 2018.<br />
<br />
Pullum, L.L., C.A. Steed, S.K. Jha, and A. Ramanathan. “Mathematically Rigorous Verification and Validation of Scientific Machine Learning,” ''DOE Scientific Machine Learning Workshop'', Bethesda, MD, Jan/Feb 2018.<br />
<br />
Ramanathan, A., L.L. Pullum, Zubir Husein, Sunny Raj, Neslisah Totosdagli, Sumanta Pattanaik, and S.K. Jha. 2017. “Adversarial attacks on computer vision algorithms using natural perturbations.” In ''2017 10th International Conference on Contemporary Computing (IC3)''. Noida, India. August 2017.<br />
<br />
Raj, S., L.L. Pullum, A. Ramanathan, and S.K. Jha. 2017. “Work in Progress: Testing Autonomous cyber-physical systems using fuzzing features derived from convolutional neural networks.” In ''ACM SIGBED International Conference on Embedded Software'' (EMSOFT). Seoul, South Korea. October 2017.<br />
<br />
Raj, S., L.L. Pullum, A. Ramanathan, and S.K. Jha, “SATYA: Defending against Adversarial Attacks using Statistical Hypothesis Testing,” in ''10th International Symposium on Foundations and Practice of Security'' (FPS 2017), Nancy, France. (Best Paper Award), 2017.<br />
<br />
Ramanathan, A., Pullum, L.L., S. Jha, et al. “Integrating Symbolic and Statistical Methods for Testing Intelligent Systems: Applications to Machine Learning and Computer Vision.” ''IEEE Design, Automation & Test in Europe''(DATE), 2016.<br />
<br />
Pullum, L.L., C. Rouff, R. Buskens, X. Cui, E. Vassiv, and M. Hinchey, “Verification of Adaptive Systems,” ''AIAA Infotech@Aerospace'' 2012, April 2012. <br />
<br />
Pullum, L.L., and C. Symons, “Failure Analysis of a Complex Learning Framework Incorporating Multi-Modal and Semi-Supervised Learning,” In ''IEEE Pacific Rim International Symposium on Dependable Computing''(PRDC 2011), 308-313, 2011. <br />
<br />
Haglich, P., C. Rouff, and L.L. Pullum, “Detecting Emergent Behaviors with Semi-Boolean Algebra,” ''Proceedings of AIAA Infotech @ Aerospace'', 2010. <br />
<br />
Pullum, L.L., Marjorie A. Darrah, and Brian J. Taylor, “Independent Verification and Validation of Neural Networks – Developing Practitioner Assistance,” ''Software Tech News'', July 2004.<br />
----<br />
<br />
<center>[[Socio-technical Systems|< Previous Article]] | [[Emerging Topics|Parent Article]] | [[Transitioning Systems Engineering to a MOdel-based Discipline|Next Article >]]</center><br />
<br />
<center>'''SEBoK v. 2.3, released 30 October 2020'''</center><br />
<br />
[[Category: Part 8]]<br />
[[Category:Topic]]<br />
[[Category:Emerging Topics]]</div>Hlehttps://sandbox.sebokwiki.org/index.php?title=Socio-technical_Systems&diff=60738Socio-technical Systems2021-04-23T04:39:48Z<p>Hle: Added author's name</p>
<hr />
<div>'''''Lead Author:''' Erika Palmer''<br />
----Though a few specific definitions exist, the term “socio-technical system” is used in many ways depending on the specific engineering or scientific domain. There are also different approaches for considering socio-technical systems depending on the life cycle stage and the specific systems engineering challenge.<br />
<br />
== The Concept and Theory ==<br />
The concept of a socio-technical system describes the interrelationship between humans and machines, and the motivation behind developing research on socio-technical systems was to cope with theoretical and practical work environment problems in industry (Ropohl, 1999). <br />
<br />
Socio-technical systems theory has been developing over the past 60 years predominately focusing on new technology and work design (Davis et al., 2014). This theory has developed into socio-technical systems thinking, and research has concentrated in several key areas:<br />
* Human factors and ergonomics (Carayon, 2006) <br />
* Organizational design (Cherns, 1976) <br />
* System design (Clegg, 2000; van Eijnatten, 1998)<br />
* Information systems (Mumford, 2006)<br />
<br />
== A Design Approach ==<br />
As a design approach, socio-technical systems design (STSD) brings human, social, organizational, and technical elements into the design of organizational systems (Baxter and Sommerville, 2011). While Baxter and Sommerville (2011) refer to computer-based systems in their definition of socio-technical systems thinking as a design approach, the generic term “technical system” is also applicable: “The underlying premise of socio-technical systems thinking is that system design should be a process that takes into account both social and technical factors that influence the functionality and usage of computer-based systems” (p.4).<br />
<br />
== Systems Engineering Context ==<br />
In a systems engineering context, it has been argued that all systems are socio-technical systems (Palmer et al., 2019). However, the concept of a socio-technical system in a systems engineering context is not well defined, though the topic has gained traction in recent years (Donaldson, 2017; Broniatowski, 2018). There are examples in the systems engineering literature where the term socio-technical system is used to refer to a system in which social and technical elements are relevant. These include studies of agent-based modeling of socio-technical systems (Heydari and Pennock, 2018), insurance systems as socio-critical systems (Yasui, 2011), and interdisciplinary systems engineering approaches to influence enterprise systems (Pennock and Rouse, 2016; Wang et al., 2018). <br />
<br />
Based on the work that the systems engineering community has produced thus far, the working definition of the term socio-technical systems in a systems engineering context is simply:<br />
<br />
''Socio-technical systems: Systems operating at the intersection of social and technical systems'' (Kroes et al., 2006)''.''<br />
<br />
== Modeling Sociotechnical Systems ==<br />
There is no “state of the practice” for how to model sociotechnical systems. There are, however, a few examples in the systems engineering literature of how systems engineers could analyze these types of systems. Outside the systems engineering literature, there is an ever-increasing number of examples of social system models. The modeling techniques found in these examples can be adapted to evaluate sociotechnical systems in a systems engineering context. Many of these are system dynamics models, and there is a journal dedicated to social system analysis, the Journal of Artificial Societies and Social Simulation (JASSS), which focuses on agent-based modeling. <br />
<br />
1) Qualitative Modeling<br />
* Insurance systems as socio-critical systems (Yasui, 2011)<br />
Yasui (2011) provides a new methodology to accommodate stakeholder goals in social system failures. This new methodology is a “soft” systems approach that brings together the Holon concept by Checkland and Scholes (1990) and the Vee Model.<br />
<br />
2) Agent-Based Modeling of Sociotechnical Systems in Systems Engineering<br />
* Agent-based modeling of sociotechnical systems (Heydari and Pennock, 2018)<br />
Heydari and Pennock (2018) illustrate how to support the design and governance of sociotechnical systems with agent-based modeling (ABM). Critically, they outline the difference between how ABM is used in physical, natural and social applications versus sociotechnical applications. <br />
* Interdisciplinary systems engineering approaches to influence enterprise systems (Pennock and Rouse, 2016) <br />
Pennock and Rouse (2016) not only show how to define an enterprise as a system, but also illustrate this with several ABM examples. They also highlight that when modeling sociotechnical systems versus traditional engineering systems, it is important to focus less on “control” and more on “influence.” <br />
<br />
3) Economic modeling <br />
* Social System Modeling Challenges (Wang et al., 2018)<br />
In their book ''Social Systems Engineering'', Wang et al. (2018) provide an overview of modeling and its challenges in evaluating social systems, and also give insight into how social system modeling is approached in economics. <br />
<br />
4) System Dynamics Modeling of Social Systems for Adaptation in an SE Sociotechnical Context<br />
* Social policy (Palmer, 2017)<br />
Palmer (2017) provides an overview of social systems in a systems engineering context, and uses system dynamics modeling of pension and sick leave policy systems to illustrate how to use systems engineering methods for social policy.<br />
* Social Systems Engineering (García‐Díaz and Olaya, 2018)<br />
García‐Díaz and Olaya (2018) give a thorough overview in their book, ''Social Systems Engineering'', of social systems and various qualitative and quantitative modeling types, and also highlight participatory system dynamics modeling (stakeholder-led system design).<br />
* Health care (Homer and Hirsch, 2006)<br />
As there is increasing attention in the systems engineering community towards health care technology, Homer and Hirsch’s (2006) paper on system dynamics modeling of public health gives a basis for how to model social systems in this domain. For example, chronic disease prevention, disease outcomes, health and risk behaviors, environmental factors, and health-related resources and delivery systems. <br />
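The system dynamics style used in several of the examples above can be sketched in a few lines of code. The following is a minimal, hypothetical two-stock model (a workforce moving between working and on sick leave, loosely in the spirit of the policy systems Palmer (2017) describes); the stock names, initial values, and rates are illustrative assumptions, not taken from any of the cited models.<br />

```python
def simulate(steps=120, dt=1.0, onset_rate=0.02, recovery_rate=0.10):
    """Euler-integrate a two-stock model: people move between the
    'working' and 'on_leave' stocks at fixed fractional rates."""
    working, on_leave = 1000.0, 0.0
    for _ in range(steps):
        onset = onset_rate * working          # flow: working -> on_leave
        recovery = recovery_rate * on_leave   # flow: on_leave -> working
        working += (recovery - onset) * dt
        on_leave += (onset - recovery) * dt
    return working, on_leave

working, on_leave = simulate()
# The two flows are symmetric, so the total population is conserved, and
# the stocks settle toward an equilibrium set by the ratio of the rates.
```

Policy questions are then explored by varying the rates (the levers a policy can move) and observing how the stocks evolve over time, rather than by "controlling" any single output directly.<br />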
<br />
==References==<br />
<br />
===Works Cited===<br />
<br />
Baxter, G. and Sommerville, I., 2011. Socio-technical systems: From design methods to systems engineering. Interacting with computers, 23(1), pp.4-17.<br />
<br />
Broniatowski, DA, 2018, ‘Building the tower without climbing it: Progress in engineering systems’, Systems Engineering, 21 (3), 259-81.<br />
<br />
Carayon, P., 2006. ‘Human factors of complex sociotechnical systems.’ Applied ergonomics, 37(4), pp.525-535.<br />
<br />
Checkland, P. and Scholes, J. 1990. ‘Soft systems methodology in action.’ Wiley: UK.<br />
<br />
Cherns, A., 1976. The principles of sociotechnical design. Human relations, 29(8), pp.783-792.<br />
<br />
Clegg, C.W., 2000. Sociotechnical principles for system design. Applied ergonomics, 31(5), pp.463-477.<br />
<br />
Davis, M.C., Challenger, R., Jayewardene, D.N. and Clegg, C.W., 2014. Advancing socio-technical systems thinking: A call for bravery. Applied ergonomics, 45(2), pp.171-180.<br />
<br />
Donaldson, W, 2017. ‘In Praise of the “Ologies”: A Discussion of and Framework for Using Soft Skills to Sense and Influence Emergent Behaviors in Sociotechnical Systems’, Systems Engineering, 20 (5), 467-78.<br />
<br />
Heydari, B and Pennock, MJ, 2018, ‘Guiding the behavior of sociotechnical systems: The role of agent‐based modeling’, Systems Engineering, 21 (3),210-26.<br />
<br />
Homer, JB and Hirsch, GB, 2006, ‘System dynamics modeling for public health: background and opportunities’, American journal of public health, 96 (3), 452-458.<br />
<br />
Kroes, P, Franssen, M, Poel, IVD and Ottens M, 2006, ‘Treating socio‐technical systems as engineering systems: some conceptual problems’, Systems Research and Behavioral Science, 23 (6), 803-814.<br />
<br />
Palmer, E, 2017, ‘Systems Engineering Applied to Evaluate Social Systems: Analyzing Systemic Challenges to the Norwegian Welfare State.’ University of Bergen: Norway.<br />
<br />
Palmer, E, Presland, I, Rhodes, D, Olaya, C, Haskins, C, Glazner, C, 2019, ‘Social Systems-Where Are We and Where Do We Dare to Go?’ Panel Discussion. 29th Annual INCOSE Symposium, Orlando, Florida<br />
<br />
Pennock, MJ and Rouse WB, 2016, ‘The epistemology of enterprises’, Systems Engineering, 19 (1), 24-43.<br />
<br />
Ropohl, G., 1999. Philosophy of socio-technical systems. Society for Philosophy and Technology Quarterly Electronic Journal, 4(3), pp.186-194.<br />
<br />
van Eijnatten, F.M., 1998. Developments in socio-technical systems design (STSD). P. J. Drenth, H. Thierry, & CJ de Wolff, Handbook of Work and Organizational Psychology, 2, pp.61-80.<br />
<br />
Wang, H, Li, S and Wang, Q, 2018. ‘Introduction to Social Systems Engineering.’ Springer: US.<br />
<br />
Yasui, T, 2011, ‘A new systems engineering approach for a Socio‐Critical System: A case study of claims‐payment failures of Japan's insurance industry,’ Systems Engineering, 14 (4), 349-63<br />
<br />
===Primary References===<br />
<br />
<br />
===Additional References===<br />
<br />
----<br />
<br />
<center>[[Emerging Topics|< Previous Article]] | [[Emerging Topics|Parent Article]] | [[Systems Engineering and Artificial Intelligence|Next Article >]]</center><br />
<br />
<center>'''SEBoK v. 2.3, released 30 October 2020'''</center><br />
<br />
[[Category: Part 8]]<br />
[[Category:Topic]]<br />
[[Category:Emerging Topics]]</div>Hlehttps://sandbox.sebokwiki.org/index.php?title=Emerging_Knowledge&diff=60737Emerging Knowledge2021-04-23T04:35:44Z<p>Hle: Added authors' names</p>
<hr />
<div>-----<br />
'''''Lead Authors:''' Robert Cloutier, Daniel DeLaurentis, Ha Phuong Le''<br />
-----<br />
<br />
Like other portions of the SEBoK, the notion and content of Part 8 is evolving. The Knowledge Areas (KAs) or Sections in Part 8 are based on topics or themes in which knowledge is emerging. For each KA, the Emerging Knowledge consists of two aspects: Emerging Topics and Emerging Research. <br />
<br />
[[File:SEBoK_Context_Diagram_Inner_P8_Ifezue_Obiako.png|centre|thumb|500x500px|'''Figure 1. SEBoK Part 8 in context (SEBoK Original).''' For more detail see [[Structure of the SEBoK]]]]<br />
<br />
==Scope and Purpose== <br />
While the practice and need for systems engineering began appearing in journals from 1950 onward, the practice currently seems to be gaining momentum in most engineering and even non-engineering circles.<br />
<br />
The classically trained systems engineers of the 1970s and even 1980s are faced with a sea change in thinking brought on by the rapid advance of software-centric systems, cybersecurity, and agent-based, object-oriented, and model-based practices. These emerging practices bring their own methods and tools. Hall (1962, p. 5) may have been prescient when he wrote “It is hard to say whether increasing complexity is the cause or the effect of man's effort to cope with his expanding environment. In either case a central feature of the trend has been the development of large and very complex systems which tie together modern society. These systems include abstract or non-physical systems, such as government and the economic system.”<br />
<br />
These changes and the rate of change are causing systems engineering to evolve. Some of the practices may not even be recognizable to classically trained systems engineers. This Part of the SEBoK is intended to introduce some of the more significant emerging changes to systems engineering.<br />
As topics discussed in this Part evolve and become mainstream, they will be moved into the appropriate Part of the SEBoK.<br />
<br />
SoSE provides a recent example of how an emerging topic from the SE community generated emerging research, ultimately resulting in a foundational body of knowledge that continues to expand. A recent article describing this evolution from emerging topic to solution now resides in Part 4 - [[Socio-Technical Features of Systems of Systems]].<br />
<br />
==Overview of Emerging Topics==<br />
''See further: [[Emerging Topics]]''<br />
<br />
The Emerging Topics section is meant to inform the reader on the more significant and emerging changes to the practice of systems engineering. Examples of these emerging topics include:<br />
<br />
* What is the potential to change systems engineering processes or the ways in which we perform systems engineering?<br />
*How will the development of artificial intelligence impact systems engineering?<br />
**Will AI change the way we think of systems architecture? <br />
**How will we perform V&V of an AI system? <br />
*How will the push towards vertically integrated digital engineering influence systems engineering?<br />
*How are social features becoming more tightly connected to technical features of systems, and how is the modeling of socio-technical systems infusing into practice?<br />
<br />
==Overview of Emerging Research==<br />
''See further: [[Emerging Research]]'' <br />
<br />
As these emerging topics gain visibility, researchers will begin to investigate them. Corporate R&D may do early work, but academia and government will formalize this research. The Emerging Research section is a place to gather the references to this disparate work into a single repository to better inform systems engineers working on related topics. The references are collected from the following sources: <br />
* PhD dissertations<br />
* INCOSE publications and events <br />
* IEEE publications and events<br />
* Research funded by National Science Foundation (NSF) – Engineering Design and Systems Engineering (EDSE)<br />
* Research funded by Systems Engineering Research Center (SERC)<br />
<br />
==References==<br />
===Works Cited===<br />
Hall, Arthur D. (1962). ''A Methodology for Systems Engineering.'' New York, NY, USA: Van Nostrand.<br />
<br />
===Additional References===<br />
Engstrom, E.W. (1957). "Systems engineering: A growing concept," in Electrical Engineering, vol. 76, no. 2, pp. 113-116, Feb. 1957, doi: 10.1109/EE.1957.6442968.<br />
<br />
Goode, H. Herbert., Machol, R. Engel. (1957). ''System Engineering: An Introduction to the Design of Large-Scale Systems.'' New York, NY, USA: McGraw-Hill.<br />
<br />
Kelly, Mervin J. (1950). “The Bell Telephone Laboratories—An example of an institute of creative technology”. Proceedings of the Royal Society B. Vol. 137, Issue 889. https://doi.org/10.1098/rspb.1950.0050.<br />
<br />
<center>[[Singapore Water Management|< Previous Article]] | [[SEBoK Table of Contents|Parent Article]] | [[Emerging Topics|Next Article >]]</center><br />
<br />
<center>'''SEBoK v. 2.3, released 30 October 2020'''</center><br />
<br />
[[Category: Part 8]]<br />
[[Category:Part]]</div>Hlehttps://sandbox.sebokwiki.org/index.php?title=Verification_and_Validation_of_Systems_in_Which_AI_is_a_Key_Element&diff=60736Verification and Validation of Systems in Which AI is a Key Element2021-04-23T04:32:16Z<p>Hle: AI article added without figures</p>
<hr />
<div>Many systems are being considered in which artificial intelligence (AI) will be a key element. Failure of an AI element can lead to system failure (Dreossi et al. 2017), hence the need for AI verification and validation (V&V). The element(s) containing AI capabilities are treated as a subsystem, and V&V is conducted on that subsystem and its interfaces with other elements of the system under study, just as V&V would be conducted on any other subsystem. That is, the high-level definitions of verification and of validation do not change for systems containing one or more AI elements.<br />
<br />
However, AI V&V challenges require approaches and solutions beyond those for conventional or traditional (those without AI elements) systems. This article provides an overview of how machine learning components/subsystems “fit” in the systems engineering framework, identifies characteristics of AI subsystems that create challenges in their V&V, illuminates those challenges, and provides some potential solutions while noting open or continuing areas of research in the V&V of AI subsystems.<br />
<br />
== Overview of V&V for AI-based Systems ==<br />
Conventional systems are engineered via three overarching phases, namely requirements, design, and V&V. These phases are applied to each subsystem and to the system under study. As shown in Figure 1, this is the case even if the subsystem is based on AI techniques.<br />
<br />
[Figure 1]<br />
<br />
AI-based systems follow a different lifecycle than do traditional systems. As shown in the general machine learning life cycle illustrated in Figure 2, V&V activities occur throughout the life cycle. In addition to requirements allocated to the AI subsystem (as is the case for conventional subsystems), there also may be requirements for data that flow up to the system from the AI subsystem.<br />
<br />
[Figure 2]<br />
<br />
== Characteristics of AI Leading to V&V Challenges ==<br />
Though some aspects of V&V for conventional systems can be used without modification, there are important characteristics of AI subsystems that lead to challenges in their verification and validation. In a survey of engineers, Ishikawa and Yoshioka (2019) identify attributes of machine learning that make its engineering difficult. According to the engineers surveyed, the top attributes, with a summary of the engineers’ comments, are:<br />
* ''Lack of an oracle'': It is difficult or impossible to clearly define the correctness criteria for system outputs or the right outputs for each individual input.<br />
* ''Imperfection'': It is intrinsically impossible for an AI system to be 100% accurate.<br />
* ''Uncertain behavior for untested data'': There is high uncertainty about how the system will behave in response to untested input data, as evidenced by radical changes in behavior given slight changes in input (e.g., adversarial examples).<br />
* ''High dependency of behavior on training data'': System behavior is highly dependent on the training data.<br />
These attributes are characteristic of AI itself and can be generalized as follows:<br />
* Erosion of determinism<br />
* Unpredictability and unexplainability of individual outputs (Sculley et al., 2014)<br />
* Unanticipated, emergent behavior, and unintended consequences of algorithms<br />
* Complex decision making of the algorithms<br />
* Difficulty of maintaining consistency and weakness against slight changes in inputs (Goodfellow et al., 2015)<br />
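The last of these characteristics, weakness against slight changes in inputs, can be illustrated with a toy sketch: a logistic classifier whose weights (hypothetical, hand-picked for illustration) place an input near the decision boundary, so that a tiny perturbation of one feature flips the predicted label.<br />

```python
import math

def classify(features, weights, bias=0.0, threshold=0.5):
    """Toy logistic classifier: returns (predicted label, confidence)."""
    z = sum(w * x for w, x in zip(weights, features)) + bias
    p = 1.0 / (1.0 + math.exp(-z))          # sigmoid output
    return (1 if p >= threshold else 0), p

weights = [4.0, -3.0]        # hypothetical "trained" weights
x = [0.74, 1.0]              # input that happens to sit near the boundary
x_nudged = [0.76, 1.0]       # a 0.02 change in a single feature

label_a, p_a = classify(x, weights)
label_b, p_b = classify(x_nudged, weights)
# The predicted label flips even though the confidence barely moves
```

Real adversarial examples (Szegedy et al., 2014; Goodfellow et al., 2015) exploit the same effect in high-dimensional models, where a perturbation imperceptible to a human can be found for a great many inputs.<br />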
<br />
== V&V Challenges of AI Systems ==<br />
<br />
=== Requirements ===<br />
Challenges with respect to AI requirements and AI requirements engineering are extensive, due in part to the practice of treating the AI element as a “black box” (Gunning 2016). Formal specification has been attempted and has proven difficult for hard-to-formalize tasks; it requires decisions on the use of quantitative or Boolean specifications and on the respective roles of data and formal requirements. The challenge here is to design effective methods to specify both desired and undesired properties of systems that use AI- or ML-based components (Seshia 2020). <br />
<br />
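The distinction between Boolean and quantitative specifications noted above can be sketched as follows; the safety property (a minimum separation distance checked over a trace) and its threshold are hypothetical.<br />

```python
def boolean_spec(distance, minimum=5.0):
    """Boolean specification: satisfied or violated, nothing in between."""
    return distance >= minimum

def quantitative_spec(distance, minimum=5.0):
    """Quantitative specification: a signed robustness margin.
    Positive means satisfied with that much slack; negative means
    violated by that amount."""
    return distance - minimum

trace = [9.0, 7.5, 6.0, 5.2]   # hypothetical separation distances over time

passed = all(boolean_spec(d) for d in trace)
worst_margin = min(quantitative_spec(d) for d in trace)
# Boolean view: the trace simply passes. Quantitative view: it passes
# with only 0.2 units of slack, exposing how close to failure it came.
```

Quantitative margins of this kind are what optimization-based falsification tools typically minimize when searching a system's input space for failing behaviors.<br />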
A taxonomy of AI requirements engineering challenges, outlined by Belani and colleagues (2019), is shown in Table 3. <br />
{| class="wikitable"<br />
|+Table 3: Requirements engineering for AI (RE4AI) taxonomy, mapping challenges to AI-related entities and requirements engineering activities (after (Belani et al., 2019))<br />
!RE4AI<br />
! colspan="3" |AI Related Entities<br />
|-<br />
|'''RE Activities'''<br />
|'''Data'''<br />
|'''Model'''<br />
|'''System'''<br />
|-<br />
|'''Elicitation'''<br />
|<nowiki>- Availability of large datasets</nowiki><br />
<br />
- Requirements analyst upgrade<br />
|<nowiki>- Lack of domain knowledge</nowiki><br />
<br />
- Undeclared consumers<br />
|<nowiki>- How to define problem /scope</nowiki><br />
<br />
- Regulation (e.g., ethics) not clear<br />
|-<br />
|'''Analysis'''<br />
|<nowiki>- Imbalanced datasets, silos</nowiki><br />
<br />
- Role: data scientist needed<br />
|<nowiki>- No trivial workflows</nowiki><br />
<br />
- Automation tools needed<br />
|<nowiki>- No integration of end results</nowiki><br />
<br />
- Role: business analyst upgrade<br />
|-<br />
|'''Specification'''<br />
|<nowiki>- Data labelling is costly, needed</nowiki><br />
<br />
- Role: data engineer needed<br />
|<nowiki>- No end-to-end pipeline support</nowiki><br />
<br />
- Minimum viable model useful<br />
|<nowiki>- Avoid design anti- patterns</nowiki><br />
<br />
- Cognitive / system architect needed<br />
|-<br />
|'''Validation'''<br />
|<nowiki>- Training data critical analysis</nowiki><br />
<br />
- Data dependencies<br />
|<nowiki>- Entanglement, CACE problem</nowiki><br />
<br />
- High scalability issues for ML<br />
|<nowiki>- Debugging, interpretability</nowiki><br />
<br />
- Hidden feedback loops<br />
|-<br />
|'''Management'''<br />
|<nowiki>- Experiment management</nowiki><br />
<br />
- No GORE-like method polished<br />
|<nowiki>- Difficult to log and reproduce</nowiki><br />
<br />
- DevOps role for AI needed<br />
|<nowiki>- IT resource limitations, costs</nowiki><br />
<br />
- Measuring performance<br />
|-<br />
|'''Documentation'''<br />
|<nowiki>- Data & model visualization</nowiki><br />
<br />
- Role: research scientist useful<br />
|<nowiki>- Datasets and model versions</nowiki><br />
<br />
- Education and training of staff<br />
|<nowiki>- Feedback from end-users</nowiki><br />
<br />
- Development method<br />
|-<br />
|'''All of the Above'''<br />
| colspan="3" |- Data privacy and data safety<br />
<br />
- Data dependencies<br />
|}<br />
CACE: changing anything changes everything<br />
<br />
GORE: goal-oriented requirements engineering<br />
<br />
=== Data ===<br />
Data is the lifeblood of AI capabilities, given that it is used to train and evaluate AI models and to produce their capabilities. Data quality attributes of importance to AI include accuracy, currency and timeliness, correctness, and consistency, in addition to usability, security and privacy, accessibility, accountability, scalability, lack of bias, and others. As noted above, the correctness of unsupervised methods is embedded in the training data and the environment.<br />
<br />
There is a question of coverage of the operational space by the training data. If the data does not adequately cover the operational space, the behavior of the AI component is questionable. However, there are no strong guarantees on when a data set is ‘large enough’. In addition, ‘large’ is not sufficient: the data must sufficiently cover the operational space.<br />
<br />
Another challenge with data is that of adversarial inputs. Szegedy et al. (2014) discovered that several ML models are vulnerable to adversarial examples. This has been demonstrated many times on image classification software; however, adversarial attacks can also be mounted against other AI tasks (e.g., natural language processing) and against techniques other than the neural networks typically used in image classification, such as reinforcement learning models (e.g., via reward hacking).<br />
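The fast gradient sign method of Goodfellow et al. (2015) illustrates how such adversarial inputs are constructed: perturb the input by a small step in the direction that increases the model’s loss. The sketch below applies it to a hand-rolled logistic regression classifier; the weights, input, and epsilon are toy values chosen for illustration, not taken from any cited study.<br />

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(x, y, w, b, eps):
    """Fast gradient sign method (Goodfellow et al. 2015) for a
    logistic regression classifier: x' = x + eps * sign(dL/dx)."""
    p = sigmoid(np.dot(w, x) + b)
    grad_x = (p - y) * w          # gradient of cross-entropy loss w.r.t. x
    return x + eps * np.sign(grad_x)

# Toy model and an input it classifies correctly as class 1.
w, b = np.array([2.0, -3.0]), 0.1
x, y = np.array([0.4, 0.1]), 1.0
x_adv = fgsm(x, y, w, b, eps=0.3)
# A small, bounded perturbation flips the predicted class.
print(sigmoid(np.dot(w, x) + b), sigmoid(np.dot(w, x_adv) + b))
```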
<br />
=== Model ===<br />
Numerous V&V challenges arise in the model space, some of which are provided below.<br />
* ''Modeling the environment'': Unknown variables, determining the correct fidelity to model, modeling human behavior. The challenge problem is providing a systematic method of environment modeling that allows one to provide provable guarantees on the system’s behavior even when there is considerable uncertainty about the environment. (Seshia 2020)<br />
* ''Modeling learning systems'': Very high dimensional input space, very high dimensional parameter or state space, online adaptation/evolution, modeling context (Seshia 2020).<br />
* ''Design and verification of models and data'': data generation, quantitative verification, compositional reasoning, and compositional specification (Seshia 2020). The challenge is to develop techniques for compositional reasoning that do not rely on having complete compositional specifications (Seshia 2017).<br />
* ''Optimization strategy must balance between over- and under-specification'': one approach uses the cost of an erroneous result (e.g., an incorrect classification) as the criterion, instead of distance measures between predicted and actual results (Faria 2018; Varshney and Alemzadeh 2017).<br />
* ''Online learning'': requires monitoring to ensure that exploration does not drive the system into unsafe states.<br />
* ''Formal methods'': intractable state-space explosion arising from the complexity of the software and of the system’s interaction with its environment, together with the difficulty of writing formal specifications.<br />
* ''Bias'': algorithms may be biased by underrepresented or incomplete training data, or by reliance on flawed information that reflects historical inequities. A biased algorithm may lead to decisions with collective disparate impact, and mitigating an algorithm’s bias involves a trade-off between fairness and accuracy.<br />
* ''Test coverage'': effective metrics for test coverage of AI components are an active area of research with several candidate metrics, but currently no clear best practice.<br />
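One candidate coverage metric is neuron coverage, in the spirit of DeepXplore (Pei et al. 2017): the fraction of neurons activated above a threshold by at least one test input. The following is a minimal sketch for a small ReLU network; the architecture, threshold, and random data are illustrative assumptions, not a validated best practice.<br />

```python
import numpy as np

def neuron_coverage(inputs, weights, biases, threshold=0.0):
    """Fraction of hidden neurons activated above `threshold` by at
    least one test input, for a feedforward ReLU network."""
    activated = []
    for W, b in zip(weights, biases):
        outputs = np.maximum(inputs @ W + b, 0.0)      # ReLU layer
        activated.append((outputs > threshold).any(axis=0))
        inputs = outputs
    flat = np.concatenate(activated)
    return flat.sum() / flat.size

# Illustrative random network (4 -> 8 -> 6) and a random test suite.
rng = np.random.default_rng(1)
weights = [rng.normal(size=(4, 8)), rng.normal(size=(8, 6))]
biases = [rng.normal(size=8), rng.normal(size=6)]
tests = rng.normal(size=(50, 4))
print(neuron_coverage(tests, weights, biases))   # value in [0, 1]
```

Adding test inputs can only increase the metric, which is what makes it usable as a coverage-style adequacy criterion.<br />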
<br />
=== Properties ===<br />
Assurance of several AI system properties is necessary to enable trust in the system, i.e., the system’s trustworthiness. This is a separate though necessary aspect of dependability for AI systems. Some important properties are listed below; the list, though extensive, is not comprehensive.<br />
* ''Accountability'': refers to the need of an AI system to be answerable for its decisions, actions and performance to users and others with whom the AI system interacts<br />
* ''Controllability'': refers to the ability of a human or other external agent to intervene in the AI system’s functioning<br />
* ''Explainability'': refers to the property of an AI system to express important factors influencing the AI system results or to provide details/reasons behind its functioning so that humans can understand<br />
* ''Interpretability'': refers to the degree to which a human can understand the cause of a decision (Miller 2017)<br />
* ''Reliability'': refers to the property of consistent intended behavior and results<br />
* ''Resilience'': refers to the ability of a system to recover operations quickly following an incident<br />
* ''Robustness'': refers to the ability of a system to maintain its level of performance when errors occur during execution and to maintain that level of performance given erroneous inputs and parameters<br />
* ''Safety'': refers to the freedom from unacceptable risk<br />
* ''Transparency'': refers to the need to describe, inspect and reproduce the mechanisms through which AI systems make decisions, communicating this to relevant stakeholders.<br />
<br />
== V&V Approaches and Standards ==<br />
<br />
=== V&V Approaches ===<br />
Prior to the proliferation of deep learning, research on V&V of neural networks addressed the adaptation of available standards, such as the then-current IEEE Std 1012 (Software Verification and Validation) processes (Pullum et al. 2007); the areas that need to be augmented to enable V&V (Taylor 2006); and examples of V&V for high-assurance systems with neural networks (Schumann et al., 2010). While these books provide techniques and lessons learned, many of which remain relevant, additional challenges introduced by deep learning remain unsolved.<br />
<br />
One of the challenges is data validation. It is vital that the data upon which AI depends undergo V&V. Data quality attributes that are important for AI systems include accuracy, currency and timeliness, correctness, consistency, usability, security and privacy, accessibility, accountability, scalability, lack of bias, and coverage of the state space. Data validation steps can include file validation, import validation, domain validation, transformation validation, aggregation rule validation, and business validation (Gao et al. 2016). <br />
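The domain-validation step above can be sketched as checking each record’s fields against declared domains before the data is used for training or evaluation. The field names, rules, and records below are hypothetical examples, not drawn from Gao et al.<br />

```python
# Hypothetical domain rules: each field must satisfy its predicate.
RULES = {
    "speed_kmh": lambda v: isinstance(v, (int, float)) and 0 <= v <= 300,
    "sensor_id": lambda v: isinstance(v, str) and v.startswith("S-"),
    "label":     lambda v: v in {"stop", "go", "yield"},
}

def domain_validate(records):
    """Return (clean, rejects): records passing every rule, plus
    (index, field) pairs identifying each violation."""
    clean, rejects = [], []
    for i, rec in enumerate(records):
        bad = [f for f, ok in RULES.items() if not ok(rec.get(f))]
        if bad:
            rejects.extend((i, f) for f in bad)
        else:
            clean.append(rec)
    return clean, rejects

data = [
    {"speed_kmh": 42.0, "sensor_id": "S-01", "label": "go"},
    {"speed_kmh": -5.0, "sensor_id": "S-02", "label": "stop"},   # bad speed
    {"speed_kmh": 10.0, "sensor_id": "X-03", "label": "turn"},   # two violations
]
clean, rejects = domain_validate(data)
print(len(clean), rejects)   # 1 [(1, 'speed_kmh'), (2, 'sensor_id'), (2, 'label')]
```

In practice such checks would be one stage in a pipeline alongside file, import, transformation, aggregation rule, and business validation.<br />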
<br />
There are several approaches to V&V of AI components, including formal methods (e.g., formal proofs, model checking, probabilistic verification), software testing, simulation-based testing and experiments. Some specific approaches are:<br />
* Metamorphic testing to test ML algorithms, addressing the oracle problem (Xie et al., 2011)<br />
* A ML test score consisting of tests for features and data, model development and ML infrastructure, and monitoring tests for ML (Breck et al., 2016)<br />
* Checking for inconsistency with desired behavior and systematically searching for worst-case outcomes when testing consistency with specifications.<br />
* Corroborative verification (Webster et al., 2020), in which several verification methods, working at different levels of abstraction and applied to the same AI component, may prove useful to verification of AI components of systems.<br />
* Testing against strong adversarial attacks (Uesato et al., 2018); researchers have found that models may show robustness to weak adversarial attacks yet little to no accuracy against strong attacks (Athalye et al., 2018; Uesato et al., 2018; Carlini and Wagner, 2017).<br />
* Use of formal verification to prove that models are consistent with specifications, e.g., (Huang et al., 2017).<br />
* Assurance cases combining the results of V&V and other activities as evidence to support claims on the assurance of systems with AI components (Kelly and Weaver, 2004; Picardi et al. 2020).<br />
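Metamorphic testing sidesteps the oracle problem by checking relations between the outputs of related inputs rather than comparing any single output to a known-correct answer. The sketch below applies one such relation — permuting or duplicating the training data must not change predictions — to a tiny nearest-centroid classifier; both the relation and the classifier are illustrative choices, not those of Xie et al.<br />

```python
import numpy as np

def nearest_centroid_predict(X_train, y_train, x):
    """Tiny classifier under test: assign x to the class whose
    training centroid is closest."""
    labels = sorted(set(y_train))
    cents = [np.mean([xi for xi, yi in zip(X_train, y_train) if yi == c], axis=0)
             for c in labels]
    return labels[int(np.argmin([np.linalg.norm(x - c) for c in cents]))]

# Metamorphic relation: reordering the training data must not change
# the prediction -- no oracle for the "correct" class is needed.
rng = np.random.default_rng(2)
X = rng.normal(size=(20, 3))
y = [i % 2 for i in range(20)]
x_new = rng.normal(size=3)

perm = rng.permutation(20)
assert (nearest_centroid_predict(X, y, x_new)
        == nearest_centroid_predict(X[perm], [y[i] for i in perm], x_new))
print("metamorphic relation holds")
```

A violation of such a relation reveals a defect in the implementation even though no ground-truth label was ever consulted.<br />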
<br />
=== Standards ===<br />
Standards development organizations (SDOs) are working earnestly to develop standards in AI, including for the safety and trustworthiness of AI systems. Below are just a few of the SDOs and their AI standardization efforts.<br />
<br />
ISO is the first international SDO to set up an expert group to carry out standardization activities for AI. Subcommittee (SC) 42 is part of the joint technical committee ISO/IEC JTC 1. SC 42 has a working group on foundational standards to provide a framework and a common vocabulary, and several other working groups on computational approaches to and characteristics of AI systems, trustworthiness, use cases, applications, and big data. (https://www.iso.org/committee/6794475.html)<br />
<br />
The IEEE P7000 series of projects is part of the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems, launched in 2016. IEEE P7009, “Fail-Safe Design of Autonomous and Semi-Autonomous Systems”, is one of 13 standards in the series. (https://standards.ieee.org/project/7009.html)<br />
<br />
Underwriters Laboratory has been involved in technology safety for 125 years and has released ANSI/UL 4600 “Standard for Safety for the Evaluation of Autonomous Products”. (<nowiki>https://ul.org/UL4600</nowiki>)<br />
<br />
The SAE G-34, Artificial Intelligence in Aviation, Committee is responsible for creating and maintaining SAE Technical Reports, including standards, on the implementation and certification aspects related to AI technologies inclusive of any on or off-board system for the safe operation of aerospace systems and aerospace vehicles. (https://www.sae.org/works/committeeHome.do?comtID=TEAG34)<br />
<br />
==References==<br />
<br />
===Works Cited===<br />
Belani, Hrvoje, Marin Vuković, and Željka Car. Requirements Engineering Challenges in Building AI-Based Complex Systems. 2019. IEEE 27<sup>th</sup> International Requirements Engineering Conference Workshops (REW).<br />
<br />
Breck, Eric, Shanqing Cai, Eric Nielsen, Michael Salib and D. Sculley. What’s your ML Test Score? A Rubric for ML Production Systems. 2016. 30<sup>th</sup> Conference on Neural Information Processing Systems (NIPS 2016), Barcelona Spain.<br />
<br />
Daume III, Hal, and Daniel Marcu. Domain adaptation for statistical classifiers. ''Journal of Artificial Intelligence Research'', 26:101–126, 2006.<br />
<br />
Dreossi, T., A. Donzé, S.A. Seshia. Compositional falsification of cyber-physical systems with machine learning components. In Barrett, C., M. Davies, T. Kahsai (eds.) NFM 2017. LNCS, vol. 10227, pp. 357-372. Springer, Cham (2017). <nowiki>https://doi.org/10.1007/978-3-319-57288-8_26</nowiki><br />
<br />
Faria, José M. Machine learning safety: An overview. In ''Proceedings of the 26th Safety-Critical Systems Symposium'', York, UK, February 2018.<br />
<br />
Farrell, M., Luckcuck, M., Fisher, M. Robotics and Integrated Formal Methods. Necessity Meets Opportunity. In: ''Integrated Formal Methods''. pp. 161-171. Springer (2018).<br />
<br />
Gao, Jerry, Chunli Xie, and Chuanqi Tao. 2016. Big Data Validation and Quality Assurance – Issues, Challenges and Needs. 2016 IEEE Symposium on Service-Oriented System Engineering (SOSE), Oxford, UK, 2016, pp. 433-441, doi: 10.1109/SOSE.2016.63.<br />
<br />
Gleirscher, M., Foster, S., Woodcock, J. New Opportunities for Integrated Formal Methods. ''ACM Computing Surveys'' 52(6), 1-36 (2020).<br />
<br />
Goodfellow, Ian, J. Shlens, C. Szegedy. Explaining and harnessing adversarial examples. In International Conference on Learning Representations (ICLR), May 2015.<br />
<br />
Gunning, D. Explainable Artificial Intelligence (XAI). In IJCAI 2016 Workshop on Deep Learning for Artificial Intelligence (DLAI), July 2016.<br />
<br />
Huang, X., M. Kwiatkowska, S. Wang, and M. Wu. Safety Verification of deep neural networks. In. Majumdar, R., and V. Kunčak (eds.) CAV 2017. LNCS, vol. 10426, pp. 3-29. Springer, Cham (2017). <nowiki>https://doi.org/10.1007/978-3-319-63387-9_1</nowiki><br />
<br />
Ishikawa, Fuyuki and Nobukazu Yoshioka. How do Engineers Perceive Difficulties in Engineering of Machine-Learning Systems? - Questionnaire Survey. 2019 IEEE/ACM Joint 7th International Workshop on Conducting Empirical Studies in Industry (CESI) and 6th International Workshop on Software Engineering Research and Industrial Practice (SER&IP) (2019)<br />
<br />
Jones, Cliff B. Tentative steps toward a development method for interfering programs. ''ACM Transactions on Programming Languages and Systems'' (TOPLAS), 5(4):596–619, 1983.<br />
<br />
Kelly, T., and R. Weaver. The goal structuring notation – a safety argument notation. In Dependable Systems and Networks 2004 Workshop on Assurance Cases, July 2004.<br />
<br />
Klein, G., Andronick, J., Fernandez, M., Kuz, I., Murray, T., Heiser, G. Formally verified software in the real world. ''Comm. of the ACM'' 61(10), 68-77 (2018).<br />
<br />
Kuwajima, Hiroshi, Hirotoshi Yasuoka, and Toshihiro Nakae. Engineering problems in machine learning systems. ''Machine Learning'' (2020) 109:1103–1126. <nowiki>https://doi.org/10.1007/s10994-020-05872-w</nowiki><br />
<br />
Lwakatare, Lucy Ellen, Aiswarya Raj, Ivica Crnkovic, Jan Bosch, and Helena Holmström Olsson. Large-scale machine learning systems in real-world industrial settings: A review of challenges and solutions. ''Information and Software Technology'' 127 (2020) 106368<br />
<br />
Luckcuck, M., Farrell, M., Dennis, L.A., Dixon, C., Fisher, M. Formal Specification and Verification of Autonomous Robotic Systems: A Survey. ''ACM Computing Surveys'' 52(5), 1-41 (2019).<br />
<br />
Marijan, Dusica and Arnaud Gotlieb. Software Testing for Machine Learning. The Thirty-Fourth AAAI Conference on Artificial Intelligence (AAAI-20) (2020)<br />
<br />
Miller, Tim. Explanation in artificial intelligence: Insights from the social sciences. arXiv Preprint arXiv:1706.07269. (2017).<br />
<br />
Pei, K., Y. Cao, J Yang, and S. Jana. DeepXplore: automated whitebox testing of deep learning systems. In The 26<sup>th</sup> Symposium on Operating Systems Principles (SOSP 2017), pp. 1-18, October 2017.<br />
<br />
Picardi, Chiara, Paterson, Colin, Hawkins, Richard David et al. (2020) Assurance Argument Patterns and Processes for Machine Learning in Safety-Related Systems. In: ''Proceedings of the Workshop on Artificial Intelligence Safety'' (SafeAI 2020). CEUR Workshop Proceedings, pp. 23-30.<br />
<br />
Pullum, Laura L., Brian Taylor, and Marjorie Darrah, ''Guidance for the Verification and Validation of Neural Networks'', IEEE Computer Society Press (Wiley), 2007.<br />
<br />
Rozier, K.Y. Specification: The Biggest Bottleneck in Formal Methods and Autonomy. In: ''Verified Software. Theories, Tools, and Experiments''. pp. 8-26. Springer (2016).<br />
<br />
Schumann, Johan, Pramod Gupta and Yan Liu. Application of neural networks in High Assurance Systems: A Survey. In ''Applications of Neural Networks in High Assurance Systems'', Studies in Computational Intelligence, pp. 1-19. Springer, Berlin, Heidelberg, 2010.<br />
<br />
Sculley, D., Gary Holt, Daniel Golovin, Eugene Davydov, Todd Phillips, Dietmar Ebner, Vinay Chaudhary, Michael Young, Jean-François Crespo, and Dan Dennison. Machine Learning: the high interest credit card of technical debt. In NIPS 2014 Workshop on Software Engineering for Machine Learning (SE4ML), December 2014.<br />
<br />
Seshia, Sanjit A. Compositional verification without compositional specification for learning-based systems. Technical Report UCB/EECS-2017-164, EECS Department, University of California, Berkeley, Nov 2017.<br />
<br />
Seshia, Sanjit A., Dorsa Sadigh, and S. Shankar Sastry. Towards Verified Artificial Intelligence. arXiv:1606.08514v4 [cs.AI] 23 Jul 2020.<br />
<br />
Szegedy, Christian, Zaremba, Wojciech, Sutskever, Ilya, Bruna, Joan, Erhan, Dumitru, Goodfellow, Ian J., and Fergus, Rob. Intriguing properties of neural networks. ICLR, abs/1312.6199, 2014. URL <nowiki>http://arxiv.org/abs/1312.6199</nowiki>.<br />
<br />
Taylor, Brian, ed. ''Methods and Procedures for the Verification and Validation of Artificial Neural Networks'', Springer-Verlag, 2005.<br />
<br />
Thompson, E. (2007). ''Mind in life: Biology, phenomenology, and the sciences of mind''. Cambridge, MA: Harvard University Press.<br />
<br />
Tiwari, Ashish, Bruno Dutertre, Dejan Jovanović, Thomas de Candia, Patrick D. Lincoln, John Rushby, Dorsa Sadigh, and Sanjit Seshia. Safety envelope for security. In ''Proceedings of the'' ''3rd International Conference on High Confidence Networked Systems'' (HiCoNS), pp. 85-94, Berlin, Germany, April 2014. ACM.<br />
<br />
Uesato, Jonathan, O’Donoghue, Brendan, van den Oord, Aaron, Kohli, Pushmeet. Adversarial Risk and the Dangers of Evaluating Against Weak Attacks. ''Proceedings of the 35<sup>th</sup> International Conference on Machine Learning'', Stockholm, Sweden, PMLR 80, 2018.<br />
<br />
Varshney, Kush R., and Homa Alemzadeh. On the safety of machine learning: Cyber-physical systems, decision sciences, and data products. ''Big Data'', 5(3):246–255, 2017.<br />
<br />
Webster, M., Wester, D.G., Araiza-Illan, D., Dixon, C., Eder, K., Fisher, M., Pipe, A.G. A corroborative approach to verification and validation of human-robot teams. ''J. Robotics Research'' 39(1) (2020).<br />
<br />
Xie, Xiaoyuan, J.W.K. Ho, C. Murphy, G. Kaiser, B. Xu, and T.Y. Chen. 2011. “Testing and Validating Machine Learning Classifiers by Metamorphic Testing,” ''Journal of Software Testing'', April 1, 84(4): 544-558, doi:10.1016/j.jss.2010.11.920.<br />
<br />
Zhang, J., Li, J. Testing and verification of neural-network-based safety-critical control software: A systematic literature review. ''Information and Software Technology'' 123, 106296 (2020).<br />
<br />
Zhang, J.M., Harman, M., Ma, L., Liu, Y. Machine learning testing: Survey, landscapes and horizons. ''IEEE Transactions on Software Engineering''. 2020, doi: 10.1109/TSE.2019.2962027.<br />
<br />
===Primary References===<br />
<br />
Belani, Hrvoje, Marin Vuković, and Željka Car. Requirements Engineering Challenges in Building AI-Based Complex Systems. 2019. IEEE 27<sup>th</sup> International Requirements Engineering Conference Workshops (REW).<br />
<br />
Dutta, S., Jha, S., Sankaranarayanan, S., Tiwari, A. 2018. Output range analysis for deep feedforward neural networks. In: NASA Formal Methods. pp. 121-138.<br />
<br />
Gopinath, D., G. Katz, C. Pāsāreanu, and C. Barrett. 2018. DeepSafe: A Data-Driven Approach for Assessing Robustness of Neural Networks. In: ''ATVA''.<br />
<br />
Huang, X., M. Kwiatkowska, S. Wang and M. Wu. 2017. Safety Verification of Deep Neural Networks. Computer Aided Verification.<br />
<br />
Jha, S., V. Raman, A. Pinto, T. Sahai, and M. Francis. 2017. On Learning Sparse Boolean Formulae for Explaining AI Decisions, ''NASA Formal Methods''.<br />
<br />
Katz, G., C. Barrett, D. Dill, K. Julian, M. Kochenderfer. 2017. Reluplex: An Efficient SMT Solver for Verifying Deep Neural Networks, <nowiki>https://arxiv.org/abs/1702.01135</nowiki>.<br />
<br />
Leofante, F., N. Narodytska, L. Pulina, A. Tacchella. 2018. Automated Verification of Neural Networks: Advances, Challenges and Perspectives, <nowiki>https://arxiv.org/abs/1805.09938</nowiki><br />
<br />
Marijan, Dusica and Arnaud Gotlieb. Software Testing for Machine Learning. The Thirty-Fourth AAAI Conference on Artificial Intelligence (AAAI-20) (2020)<br />
<br />
Mirman, M., T. Gehr, and M. Vechev. 2018. Differentiable Abstract Interpretation for Provably Robust Neural Networks. ''International Conference on Machine Learning''.<br />
<br />
Pullum, Laura L., Brian Taylor, and Marjorie Darrah, ''Guidance for the Verification and Validation of Neural Networks'', IEEE Computer Society Press (Wiley), 2007.<br />
<br />
Seshia, Sanjit A., Dorsa Sadigh, and S. Shankar Sastry. Towards Verified Artificial Intelligence. arXiv:1606.08514v4 [cs.AI] 23 Jul 2020.<br />
<br />
Taylor, Brian, ed. ''Methods and Procedures for the Verification and Validation of Artificial Neural Networks'', Springer-Verlag, 2005.<br />
<br />
Xiang, W., P. Musau, A. Wild, D.M. Lopez, N. Hamilton, X. Yang, J. Rosenfeld, and T. Johnson. 2018. Verification for Machine Learning, Autonomy, and Neural Networks Survey. <nowiki>https://arxiv.org/abs/1810.01989</nowiki><br />
<br />
Zhang, J., Li, J. Testing and verification of neural-network-based safety-critical control software: A systematic literature review. ''Information and Software Technology'' 123, 106296 (2020).<br />
<br />
===Additional References===<br />
Jha, Sumit Kumar, Susmit Jha, Rickard Ewetz, Sunny Raj, Alvaro Velasquez, Laura L. Pullum, and Ananthram Swami. An Extension of Fano’s Inequality for Characterizing Model Susceptibility to Membership Inference Attacks. arXiv:2009.08097v1 [cs.LG] 17 Sep 2020.<br />
<br />
Sunny Raj, Mesut Ozdag, Steven Fernandes, Sumit Kumar Jha, Laura Pullum, “On the Susceptibility of Deep Neural Networks to Natural Perturbations,” ''AI Safety 2019'' (held in conjunction with IJCAI 2019 - International Joint Conference on Artificial Intelligence), Macao, China, August 2019.<br />
<br />
Ak, R., R. Ghosh, G. Shao, H. Reed, Y.-T. Lee, L.L. Pullum. “Verification-Validation and Uncertainty Quantification Methods for Data-Driven Models in Advanced Manufacturing,” ''ASME Verification and Validation Symposium'', Minneapolis, MN, 2018.<br />
<br />
Pullum, L.L., C.A. Steed, S.K. Jha, and A. Ramanathan. “Mathematically Rigorous Verification and Validation of Scientific Machine Learning,” ''DOE Scientific Machine Learning Workshop'', Bethesda, MD, Jan/Feb 2018.<br />
<br />
Ramanathan, A., L.L. Pullum, Zubir Husein, Sunny Raj, Neslisah Totosdagli, Sumanta Pattanaik, and S.K. Jha. 2017. “Adversarial attacks on computer vision algorithms using natural perturbations.” In ''2017 10th International Conference on Contemporary Computing (IC3)''. Noida, India. August 2017.<br />
<br />
Raj, S., L.L. Pullum, A. Ramanathan, and S.K. Jha. 2017. “Work in Progress: Testing Autonomous cyber-physical systems using fuzzing features derived from convolutional neural networks.” In ''ACM SIGBED International Conference on Embedded Software'' (EMSOFT). Seoul, South Korea. October 2017.<br />
<br />
Raj, S., L.L. Pullum, A. Ramanathan, and S.K. Jha, “SATYA: Defending against Adversarial Attacks using Statistical Hypothesis Testing,” in ''10th International Symposium on Foundations and Practice of Security'' (FPS 2017), Nancy, France. (Best Paper Award), 2017.<br />
<br />
Ramanathan, A., Pullum, L.L., S. Jha, et al. “Integrating Symbolic and Statistical Methods for Testing Intelligent Systems: Applications to Machine Learning and Computer Vision.” ''IEEE Design, Automation & Test in Europe''(DATE), 2016.<br />
<br />
Pullum, L.L., C. Rouff, R. Buskens, X. Cui, E. Vassiv, and M. Hinchey, “Verification of Adaptive Systems,” ''AIAA Infotech@Aerospace'' 2012, April 2012. <br />
<br />
Pullum, L.L., and C. Symons, “Failure Analysis of a Complex Learning Framework Incorporating Multi-Modal and Semi-Supervised Learning,” In ''IEEE Pacific Rim International Symposium on Dependable Computing''(PRDC 2011), 308-313, 2011. <br />
<br />
Haglich, P., C. Rouff, and L.L. Pullum, “Detecting Emergent Behaviors with Semi-Boolean Algebra,” ''Proceedings of AIAA Infotech @ Aerospace'', 2010. <br />
<br />
Pullum, L.L., Marjorie A. Darrah, and Brian J. Taylor, “Independent Verification and Validation of Neural Networks – Developing Practitioner Assistance,” ''Software Tech News'', July 2004.<br />
----<br />
<br />
<center>[[Socio-technical Systems|< Previous Article]] | [[Emerging Topics|Parent Article]] | [[Transitioning Systems Engineering to a Model-based Discipline|Next Article >]]</center><br />
<br />
<center>'''SEBoK v. 2.3, released 30 October 2020'''</center><br />
<br />
[[Category: Part 8]]<br />
[[Category:Topic]]<br />
[[Category:Emerging Topics]]</div>Hlehttps://sandbox.sebokwiki.org/index.php?title=Emerging_Topics&diff=60709Emerging Topics2021-04-20T14:29:30Z<p>Hle: Added links to Topics in Part 8</p>
<hr />
<div>'''''Lead Author:''' Robert Cloutier''<br />
-----<br />
<br />
The Emerging Topics section is intended to introduce and inform the reader on significant and rapidly emerging needs and trends in practicing systems engineering within the community. It is not intended to be all-inclusive. Instead, those topics that have a high probability of significantly impacting the practice of systems engineering, as determined by the SEBoK editorial board, are covered. If the reader has recommendations of emerging topics that should be covered, please send an email to SEBoK@incose.org, or leave a comment in the comment feature at the bottom of this page.<br />
<br />
== Introduction to Systems Engineering Transformation ==<br />
The knowledge covered in this KA reflects the transformation and continued evolution of SE, which are formed by the current and future challenges (see [[Systems Engineering: Historic and Future Challenges]]). This notion of SE transformation and the other areas of knowledge which it includes are discussed briefly below.<br />
<br />
The INCOSE Systems Engineering Vision 2025 (INCOSE 2014) describes the global context for SE, the current state of SE practice and the possible future state of SE. It describes a number of ways in which SE continues to evolve to meet modern system challenges. These are summarized briefly below. <br />
<br />
Systems engineering has evolved from a combination of practices used in a number of related industries (particularly aerospace and defense). These have been used as the basis for a standardized approach to the life cycle of any complex system (see [[Systems Engineering and Management]]). Hence, SE practices are still largely based on heuristics. Efforts are underway to evolve a theoretical foundation for systems engineering (see [[Foundations of Systems Engineering]]) considering foundational knowledge from a variety of sources. <br />
<br />
Systems engineering continues to evolve in response to a long history of increasing system [[Complexity (glossary)|'''complexity''']]. Much of this evolution is in the models and tools focused on specific aspects of SE, such as understanding stakeholder needs, representing system architectures or modeling specific system properties. The integration across disciplines, phases of development, and projects, as well as between technologies and humans, continues to represent a key systems engineering challenge. More recently, the rise of Artificial Intelligence (AI) introduces unprecedented challenges in verification and validation of AI-infused systems, but also opens up new opportunities to implement AI methodologies in the design of systems. <br />
<br />
Systems engineering is gaining recognition across industries, academia and governments. However, SE practice varies across industries, organizations, and system types. Cross-fertilization of systems engineering practices across industries has begun, slowly but surely; however, the global need for systems capabilities has outpaced the progress in systems engineering. <br />
<br />
INCOSE Vision 2025 concludes that SE is poised to play a major role in some of the global challenges of the 21st century, that it has already begun to change to meet these challenges and that it needs to undergo a more significant '''transformation''' to fully meet these challenges. The following bullet points are taken from the summary section of Vision 2025 and define the attributes of a transformed SE discipline in the future:<br />
* Relevant to a broad range of application domains, well beyond its traditional roots in aerospace and defense, to meeting society’s growing quest for sustainable system solutions to providing fundamental needs, in the globally competitive environment.<br />
* Applied more widely to assessments of socio-physical systems in support of policy decisions and other forms of remediation.<br />
* Comprehensively integrating multiple market, social and environmental stakeholder demands against “end-to-end” life-cycle considerations and long-term risks.<br />
* A key integrating role to support collaboration that spans diverse organizational and regional boundaries, and a broad range of disciplines.<br />
* Supported by a more encompassing foundation of theory and sophisticated model-based methods and tools allowing a better understanding of increasingly complex systems and decisions in the face of uncertainty.<br />
* Enhanced by an educational infrastructure that stresses systems thinking and systems analysis at all learning phases.<br />
* Practiced by a growing cadre of professionals who possess not only technical acumen in their domain of application, but who also have mastery of the next generation of tools and methods necessary for the systems and integration challenges of the times.<br />
Some of these future directions of SE are covered in the SEBoK. Others need to be introduced and fully integrated into the SE knowledge areas as they evolve. This KA will be used to provide an overview of these transforming aspects of SE as they emerge. This transformational knowledge will be integrated into all aspects of the SEBoK as it matures.<br />
<br />
==Topics in Part 8==<br />
<br />
*[[Transitioning Systems Engineering to a Model-based Discipline]]<br />
*[[Model-Based Systems Engineering Adoption Trends 2009-2018]]<br />
*[[Digital Engineering]]<br />
*[[Set-Based Design]]<br />
*[[Socio-technical Systems]]<br />
*[[Systems Engineering and Artificial Intelligence]]<br />
==References==<br />
===Works Cited===<br />
None.<br />
<br />
===Additional References===<br />
None.<br />
<br />
<center>[[Emerging Knowledge|< Previous Article]] | [[Emerging Knowledge|Parent Article]] | [[Introduction to SE Transformation|Next Article >]]</center><br />
<br />
<center>'''SEBoK v. 2.3, released 30 October 2020'''</center><br />
<br />
[[Category: Part 8]]<br />
[[Category:Knowledge Area]]</div>Hlehttps://sandbox.sebokwiki.org/index.php?title=Emerging_Knowledge&diff=60708Emerging Knowledge2021-04-20T14:27:15Z<p>Hle: Add links to Emerging Topics and Emerging Research pages</p>
<hr />
<div>-----<br />
'''''Lead Author:''' Robert Cloutier''<br />
-----<br />
<br />
Like other portions of the SEBoK, the notion and content of Part 8 is evolving. The Knowledge Areas (KAs) or Sections in Part 8 are based on the topics or themes that see Emerging Knowledge. For each KA, the Emerging Knowledge consists of two aspects: Emerging Topics and Emerging Research. <br />
<br />
[[File:SEBoK_Context_Diagram_Inner_P8_Ifezue_Obiako.png|centre|thumb|500x500px|'''Figure 1. SEBoK Part 8 in context (SEBoK Original).''' For more detail see [[Structure of the SEBoK]]]]<br />
<br />
==Scope and Purpose== <br />
While the practice and need for systems engineering began appearing in journals from 1950 onward, the practice currently seems to be gaining momentum in most engineering and even non-engineering circles.<br />
<br />
The classically trained systems engineers of the 1970s and even 1980s are faced with a sea change in thinking brought on by the rapid advance of the software centricity of our systems, cybersecurity, and agent-based, object-oriented, and model-based practices. These emerging practices bring their own methods and tools. Hall (1962, p. 5) may have been prescient when he wrote “It is hard to say whether increasing complexity is the cause or the effect of man's effort to cope with his expanding environment. In either case a central feature of the trend has been the development of large and very complex systems which tie together modern society. These systems include abstract or non-physical systems, such as government and the economic system.”<br />
<br />
These changes and the rate of change are causing systems engineering to evolve. Some of the practices may not even be recognizable to classically trained systems engineers. This Part of the SEBoK is intended to introduce some of the more significant emerging changes to systems engineering.<br />
As topics discussed in this Part evolve and become mainstream, they will be moved into the appropriate Part of the SEBoK.<br />
<br />
SoSE provides a recent example of an emerging topic from the SE community that generated emerging research, ultimately resulting in a foundational body of knowledge that continues to expand. A recent article describing this evolution from emerging topic to solution now resides in Part 4 - [[Socio-Technical Features of Systems of Systems]].<br />
<br />
==Overview of Emerging Topics==<br />
''See further: [[Emerging Topics]]''<br />
<br />
The Emerging Topics section is meant to inform the reader on the more significant and emerging changes to the practice of systems engineering. Examples of these emerging topics include:<br />
<br />
* What is the potential to change systems engineering processes or the ways in which we perform systems engineering?<br />
*How will the development of artificial intelligence impact systems engineering?<br />
**Will AI change the way we think of systems architecture? <br />
**How will we perform V&V of an AI system? <br />
*How will the push towards vertically integrated digital engineering influence systems engineering?<br />
*How are social features becoming more tightly connected to technical features of systems, and how is the modeling of socio-technical systems infusing into practice?<br />
<br />
==Overview of Emerging Research==<br />
''See further: [[Emerging Research]]'' <br />
<br />
As these emerging topics gain visibility, researchers will begin to investigate them. Corporate R&D may do early work, but academia and government will formalize this research. The Emerging Research section is a place to gather the references to this disparate work into a single repository to better inform systems engineers working on related topics. The references are collected from the following sources: <br />
* PhD dissertations<br />
* INCOSE publications and events <br />
* IEEE publications and events<br />
* Research funded by National Science Foundation (NSF) – Engineering Design and Systems Engineering (EDSE)<br />
* Research funded by Systems Engineering Research Center (SERC)<br />
<br />
==References==<br />
===Works Cited===<br />
Hall, Arthur D. (1962). ''A Methodology for Systems Engineering.'' New York, NY, USA: Van Nostrand.<br />
<br />
===Additional References===<br />
Engstrom, E.W. (1957). "Systems engineering: A growing concept." ''Electrical Engineering'', vol. 76, no. 2, pp. 113-116. doi: 10.1109/EE.1957.6442968.<br />
<br />
Goode, Harry H. and Robert E. Machol. (1957). ''System Engineering: An Introduction to the Design of Large-Scale Systems.'' New York, NY, USA: McGraw-Hill.<br />
<br />
Kelly, Mervin J. (1950). “The Bell Telephone Laboratories—An example of an institute of creative technology”. Proceedings of the Royal Society B. Vol. 137, Issue 889. https://doi.org/10.1098/rspb.1950.0050.<br />
<br />
<center>[[Singapore Water Management|< Previous Article]] | [[SEBoK Table of Contents|Parent Article]] | [[Emerging Topics|Next Article >]]</center><br />
<br />
<center>'''SEBoK v. 2.3, released 30 October 2020'''</center><br />
<br />
[[Category: Part 8]]<br />
[[Category:Part]]</div>Hlehttps://sandbox.sebokwiki.org/index.php?title=Socio-technical_Systems&diff=60707Socio-technical Systems2021-04-20T13:14:34Z<p>Hle: Content of Socio-Technical Systems added</p>
<hr />
<div>Though a few specific definitions exist, the term “socio-technical system” is used in many ways depending on the specific engineering or scientific domain. There are also different approaches for considering socio-technical systems depending on the life cycle stage and the specific systems engineering challenge.<br />
<br />
== The Concept and Theory ==<br />
The concept of a socio-technical system describes the interrelationship between humans and machines, and the motivation behind developing research on socio-technical systems was to cope with theoretical and practical work environment problems in industry (Ropohl, 1999). <br />
<br />
Socio-technical systems theory has been developing over the past 60 years, predominantly focusing on new technology and work design (Davis et al., 2014). This theory has developed into socio-technical systems thinking, and research has concentrated in several key areas:<br />
* Human factors and ergonomics (Carayon, 2006) <br />
* Organizational design (Cherns, 1976) <br />
* System design (Clegg, 2000; van Eijnatten, 1998)<br />
* Information systems (Mumford, 2006)<br />
<br />
== A Design Approach ==<br />
As a design approach, socio-technical systems design (STSD) brings human, social, organizational, and technical elements together in the design of organizational systems (Baxter and Sommerville, 2011). While Baxter and Sommerville (2011) refer to computer-based systems in their definition of socio-technical systems thinking as a design approach, the generic term “technical system” is also applicable: “The underlying premise of socio-technical systems thinking is that system design should be a process that takes into account both social and technical factors that influence the functionality and usage of computer-based systems” (p. 4).<br />
<br />
== Systems Engineering Context ==<br />
In a systems engineering context, it has been argued that all systems are socio-technical systems (Palmer, et al., 2019). However, the concept of socio-technical systems in a systems engineering context is not well defined, though the topic has gained traction in recent years (Donaldson, 2017; Broniatowski, 2018). There are examples in the systems engineering literature where the term socio-technical systems is used to refer to a system in which both social and technical elements are relevant. These include studies of agent-based modeling of socio-technical systems (Heydari and Pennock, 2018), insurance systems as socio-critical systems (Yasui, 2011) and interdisciplinary systems engineering approaches to influence enterprise systems (Pennock and Rouse, 2016; Wang et al., 2018). <br />
<br />
Based on the work that the systems engineering community has produced thus far, the working definition of the term socio-technical systems in a systems engineering context is simply:<br />
<br />
''Socio-technical systems: Systems operating at the intersection of social and technical systems'' (Kroes et al., 2006)''.''<br />
<br />
== Modeling Sociotechnical Systems ==<br />
There is no “state of the practice” for how to model sociotechnical systems. There are, however, a few examples in the systems engineering literature of how systems engineers could analyze these types of systems. Outside the systems engineering literature, there is an ever-increasing number of examples of social system models. The modeling techniques found in these examples can be adapted to evaluate sociotechnical systems in a systems engineering context. Many of these are system dynamics models, and there is a journal dedicated to social system analysis, the Journal of Artificial Societies and Social Simulation (JASSS), which focuses on agent-based modeling. <br />
<br />
1) Qualitative Modeling<br />
* Insurance systems as socio-critical systems (Yasui, 2011)<br />
Yasui (2011) provides a new methodology to accommodate stakeholder goals in social system failures. This new methodology is a “soft” systems approach that brings together the Holon concept by Checkland and Scholes (1990) and the Vee Model.<br />
<br />
2) Agent-Based Modeling of Sociotechnical Systems in Systems Engineering<br />
* Agent-based modeling of sociotechnical systems (Heydari and Pennock, 2018)<br />
Heydari and Pennock (2018) illustrate how to support the design and governance of sociotechnical systems with agent-based modeling (ABM). Critically, they outline the difference between how ABM is used in physical, natural and social applications versus sociotechnical applications. <br />
* Interdisciplinary systems engineering approaches to influence enterprise systems (Pennock and Rouse, 2016) <br />
Pennock and Rouse (2016) not only show how to define an enterprise as a system, but also illustrate this with several ABM examples. They also highlight that when modeling sociotechnical systems versus traditional engineering systems, it is important to focus less on “control” and more on “influence.” <br />
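To make the flavor of such models concrete, the threshold-style adoption dynamic often used in agent-based studies of technology diffusion can be sketched in a few lines. The agent design, thresholds, and parameter values below are illustrative assumptions, not taken from the works cited above; the sketch only shows how peer “influence,” rather than central “control,” can drive system-level behavior.

```python
import random

class Agent:
    def __init__(self, threshold):
        self.threshold = threshold  # fraction of overall adoption needed before this agent adopts
        self.adopted = False

def step(agents):
    """One synchronous update: agents respond to the current adoption level."""
    rate = sum(a.adopted for a in agents) / len(agents)
    for a in agents:
        if not a.adopted and rate >= a.threshold:
            a.adopted = True

def run(n_agents=100, seed_fraction=0.1, steps=20):
    rng = random.Random(42)  # fixed seed for reproducibility
    agents = [Agent(rng.random()) for _ in range(n_agents)]
    for a in rng.sample(agents, int(n_agents * seed_fraction)):
        a.adopted = True  # a small set of early adopters seeds the cascade
    history = []
    for _ in range(steps):
        step(agents)
        history.append(sum(a.adopted for a in agents))
    return history  # adoption count over time

history = run()
```

Because adoption never reverts, the adoption count can only grow; whether the cascade stalls or spreads system-wide depends entirely on the distribution of agent thresholds, which is the kind of emergent sensitivity these models are used to explore.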
<br />
3) Economic modeling <br />
* Social System Modeling Challenges (Wang et al., 2018)<br />
In their book, Social Systems Engineering, Wang et al. (2018) provide not only an overview of modeling and its challenges in evaluating social systems, but also insight into how social system modeling is approached in economics. <br />
<br />
4) System Dynamics Modeling of Social Systems for Adaptation in an SE Sociotechnical Context<br />
* Social policy (Palmer, 2017)<br />
Palmer (2017) provides an overview of social systems in a systems engineering context, and uses system dynamics modeling of pension and sick leave policy systems to illustrate how to use systems engineering methods for social policy.<br />
* Social Systems Engineering (García‐Díaz and Olaya, 2018)<br />
In their book, Social Systems Engineering, García‐Díaz and Olaya (2018) give a thorough overview of social systems and various qualitative and quantitative modeling types, and also highlight participatory system dynamics modeling (stakeholder-led system design).<br />
* Health care (Homer and Hirsch, 2006)<br />
As there is increasing attention in the systems engineering community towards health care technology, Homer and Hirsch’s (2006) paper on system dynamics modeling of public health gives a basis for how to model social systems in this domain, covering, for example, chronic disease prevention, disease outcomes, health and risk behaviors, environmental factors, and health-related resources and delivery systems. <br />
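As an illustration of the stock-and-flow style used in system dynamics, a minimal two-stock model of disease onset with a prevention program might look as follows. The stocks, flow, and parameter values here are invented for illustration and are not taken from Homer and Hirsch (2006); real public-health models involve many more stocks and feedback loops.

```python
def simulate(years=30, dt=0.25, onset_rate=0.05, prevention_effect=0.4):
    healthy, diseased = 1000.0, 50.0  # stocks (people), illustrative values
    trajectory = []
    for i in range(int(years / dt)):
        # flow (people/year): disease onset, slowed by the prevention program
        onset = onset_rate * (1 - prevention_effect) * healthy
        healthy -= onset * dt   # simple Euler integration of the stocks
        diseased += onset * dt
        trajectory.append((i * dt, healthy, diseased))
    return trajectory

baseline = simulate()
with_prevention = simulate(prevention_effect=0.7)
```

Comparing the two runs shows the typical use of such models: the stronger prevention program leaves fewer people in the diseased stock at the end of the horizon, while total population is conserved by construction.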
<br />
==References==<br />
<br />
===Works Cited===<br />
<br />
Baxter, G. and Sommerville, I., 2011. Socio-technical systems: From design methods to systems engineering. Interacting with computers, 23(1), pp.4-17.<br />
<br />
Broniatowski, DA, 2018, ‘Building the tower without climbing it: Progress in engineering systems’, Systems Engineering, 21 (3), 259-81.<br />
<br />
Carayon, P., 2006. ‘Human factors of complex sociotechnical systems.’ Applied ergonomics, 37(4), pp.525-535.<br />
<br />
Checkland, P. and Scholes, J. 1990. ‘Soft systems methodology in action.’ Wiley: UK.<br />
<br />
Cherns, A., 1976. The principles of sociotechnical design. Human relations, 29(8), pp.783-792.<br />
<br />
Clegg, C.W., 2000. Sociotechnical principles for system design. Applied ergonomics, 31(5), pp.463-477.<br />
<br />
Davis, M.C., Challenger, R., Jayewardene, D.N. and Clegg, C.W., 2014. Advancing socio-technical systems thinking: A call for bravery. Applied ergonomics, 45(2), pp.171-180.<br />
<br />
Donaldson, W, 2017. ‘In Praise of the “Ologies”: A Discussion of and Framework for Using Soft Skills to Sense and Influence Emergent Behaviors in Sociotechnical Systems’, Systems Engineering, 20 (5), 467-78.<br />
<br />
Heydari, B and Pennock, MJ, 2018, ‘Guiding the behavior of sociotechnical systems: The role of agent‐based modeling’, Systems Engineering, 21 (3),210-26.<br />
<br />
Homer, JB and Hirsch, GB, 2006, ‘System dynamics modeling for public health: background and opportunities’, American journal of public health, 96 (3), 452-458.<br />
<br />
Kroes, P, Franssen, M, Poel, IVD and Ottens M, 2006, ‘Treating socio‐technical systems as engineering systems: some conceptual problems’, Systems Research and Behavioral Science, 23 (6), 803-814.<br />
<br />
Palmer, E, 2017, ‘Systems Engineering Applied to Evaluate Social Systems: Analyzing Systemic Challenges to the Norwegian Welfare State.’ University of Bergen: Norway.<br />
<br />
Palmer, E, Presland, I, Rhodes, D, Olaya, C, Haskins, C, Glazner, C, 2019, ‘Social Systems-Where Are We and Where Do We Dare to Go?’ Panel Discussion. 29th Annual INCOSE Symposium, Orlando, Florida.<br />
<br />
Pennock, MJ and Rouse WB, 2016, ‘The epistemology of enterprises’, Systems Engineering, 19 (1), 24-43.<br />
<br />
Ropohl, G., 1999. Philosophy of socio-technical systems. Society for Philosophy and Technology Quarterly Electronic Journal, 4(3), pp.186-194.<br />
<br />
van Eijnatten, F.M., 1998. Developments in socio-technical systems design (STSD). P. J. Drenth, H. Thierry, & CJ de Wolff, Handbook of Work and Organizational Psychology, 2, pp.61-80.<br />
<br />
Wang, H, Li, S and Wang, Q, 2018. ‘Introduction to Social Systems Engineering.’ Springer: US.<br />
<br />
Yasui, T, 2011, ‘A new systems engineering approach for a Socio‐Critical System: A case study of claims‐payment failures of Japan's insurance industry,’ Systems Engineering, 14 (4), 349-63.<br />
<br />
===Primary References===<br />
<br />
<br />
===Additional References===<br />
<br />
----<br />
<br />
<center>[[Emerging Topics|< Previous Article]] | [[Emerging Topics|Parent Article]] | [[Systems Engineering and Artificial Intelligence|Next Article >]]</center><br />
<br />
<center>'''SEBoK v. 2.3, released 30 October 2020'''</center><br />
<br />
[[Category: Part 8]]<br />
[[Category:Topic]]<br />
[[Category:Emerging Topics]]</div>Hlehttps://sandbox.sebokwiki.org/index.php?title=Emerging_Research&diff=60550Emerging Research2021-04-01T04:30:31Z<p>Hle: First intro sentence</p>
<hr />
<div>-----<br />
'''''Lead Authors:''' Robert Cloutier, Arthur Pyster''<br />
-----<br />
<br />
The Emerging Research topic under the SEBoK Emerging Knowledge is a place to showcase some of the systems engineering research published in the past 3-5 years.<br />
<br />
==Doctoral Dissertations==<br />
Doctoral-level systems engineering research has taken root over the last two decades. Additionally, many institutions have either an Industrial Engineering or a Systems Engineering Master’s program. This has enabled new and interesting research to be conducted. Here you will find bibliographic citations and summaries for recently defended research.<br />
<br />
===Towards Early Lifecycle Prediction of System Reliability===<br />
Salter, C. “Towards early lifecycle prediction of system reliability,” Ph.D. dissertation University of South Alabama, Mobile, Alabama, July 2018. Available: [https://order.proquest.com/OA_HTML/pqdtibeCAcdLogin.jsp;jsessionid=e0c74b22f5dff64bf4a20c1deef606a0397a02e9aa59ec701b81e5d7cde90387.e34PbxmRc3qPbO0Lbx4Nc3yMbxiNe0?ref=https%3A%2F%2Forder.proquest.com%2FOA_HTML%2FpqdtibeCCtpItmDspRte.jsp%3Fdlnow%3D1%26item%3D10840641%26rpath%3Dhttps%253A%252F%252Fsearch.proquest.com%252Fpqdtglobal%252Fredirectfor%253Faccountid%253D14541%26track%3D1SRCH&sitex=10020:22372:US&sitex=10020:22372:US ProQuest Store]<br />
<br />
Reliability is traditionally defined as “the probability that an item will perform a required function without failure under stated conditions for a stated period of time” (O'Connor, 2012). This definition is applicable to all levels of a system, from the smallest part to the system as a whole. Predicting reliability requires extensive knowledge of the system of interest, thus making prediction difficult and complex. This problem is further complicated by the desire to predict system reliability early in the acquisition lifecycle. This work set out to develop a model for the prediction of system reliability early in the system lifecycle. The model utilizes eight factors: number of system requirements, number of major interfaces, number of operational environments, requirements understanding, technology maturity, manufacturability, company experience, and performance convergence. These factors come together to form a model much like the software engineering and systems engineering models COCOMO and COSYSMO. This work provides the United States Department of Defense a capability that previously did not exist: the estimation of system reliability early in the system lifecycle. The research demonstrates that information available during early system development may be used to predict system reliability. Through testing, the author found that a model of this type could provide reliability predictions for military ground vehicles within 25% of their actual recorded reliability values. <br />
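Although the dissertation's actual model form and calibration are not reproduced here, a COSYSMO-style parametric model built from size drivers and rating-based multipliers can be sketched as follows. Only the eight factor names come from the summary above; the weights, rating scales, exponent, and constant are invented for illustration.

```python
# The three counts act as size drivers; the five qualitative factors act as
# rating-based multipliers. All numeric values below are illustrative only.
SIZE_DRIVERS = ("requirements", "interfaces", "environments")

MULTIPLIERS = {  # rating -> factor (<1 lowers the estimate, >1 raises it)
    "requirements_understanding": {"low": 1.20, "nominal": 1.0, "high": 0.85},
    "technology_maturity":        {"low": 1.30, "nominal": 1.0, "high": 0.90},
    "manufacturability":          {"low": 1.15, "nominal": 1.0, "high": 0.90},
    "company_experience":         {"low": 1.25, "nominal": 1.0, "high": 0.85},
    "performance_convergence":    {"low": 1.20, "nominal": 1.0, "high": 0.90},
}

def predict(size_counts, ratings, a=2.0, e=0.9):
    """Multiplicative parametric form: a * (total size)^e * product of multipliers."""
    size = sum(size_counts[d] for d in SIZE_DRIVERS)
    product = 1.0
    for factor, rating in ratings.items():
        product *= MULTIPLIERS[factor][rating]
    return a * size ** e * product

estimate = predict(
    {"requirements": 120, "interfaces": 8, "environments": 3},
    {factor: "nominal" for factor in MULTIPLIERS},
)
```

In COCOMO-family models the constant, exponent, and multiplier tables are fitted to historical project data rather than chosen by hand; the sketch only shows the multiplicative structure such models share.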
<br />
===Toward the Evolution of Information Digital Ecosystems===<br />
Lippert, K. “Toward the evolution of information digital ecosystems,” Ph.D dissertation, University of South Alabama, Mobile, Alabama, May 2018. Available: [https://order.proquest.com/OA_HTML/pqdtibeCAcdLogin.jsp?ref=https%3A%2F%2Forder.proquest.com%2FOA_HTML%2FpqdtibeCCtpItmDspRte.jsp%3Fdlnow%3D1%26item%3D10790760%26rpath%3Dhttps%253A%252F%252Fsearch.proquest.com%252Fpqdtglobal%252Fredirectfor%253Faccountid%253D14541%26track%3D1SRCH&sitex=10020:22372:US&sitex=10020:22372:US ProQuest Store].<br />
<br />
Digital ecosystems are the next generation of Internet and network applications, promising a whole new world of distributed and open systems that can interact, self-organize, evolve, and adapt. These ecosystems transcend traditional collaborative environments, such as client-server, peer-to-peer, or hybrid models (e.g., web services) to become a self-organized, interactive environment. The complexity of these digital ecosystems will encourage evolution through adaptive processes and selective pressures of one member on another to satisfy interaction, adaptive organization, and, incidentally, human curiosity. This work addresses one of the essential parts of the digital ecosystem – the information architecture. The research, inspired by systems thinking influenced by both biological models and science fiction, applies the TRIZ method to the contradictions raised by evolving data. This inspired the application of patterns and metaphor as a means for coping with the evolution of the ecosystem. The metaphor is explored as a model of representation of rapidly changing information through a demonstration of an adaptive digital ecosystem. The combination of this type of data representation with dynamic programming and adaptive interfaces will enable the development of the various components required by a true digital ecosystem.<br />
<br />
===Cybersecurity Decision Patterns as Adaptive Knowledge Encoding in Cybersecurity Operations===<br />
Willett, K. “Cybersecurity decision patterns as adaptive knowledge encoding in cybersecurity operations”, Ph.D. dissertation, Stevens Institute of Technology, Hoboken, NJ, July 2016. Available: https://pqdtopen.proquest.com/doc/1875237837.html?FMT=ABS.<br />
<br />
Cyberspace adversaries perform successful exploits using automated adaptable tools. Cyberspace defense is too slow because existing response solutions require humans in-the-loop across sensing, sense-making, decision-making, acting, command, and control of security operations (Dōne et al. 2016). Security automation is necessary to provide for cyber defense dynamic adaptability in response to an agile adversary with intelligence and intent who adapts quickly to exploit new vulnerabilities and new safeguards. The rules for machine-encoding security automation must come from people; from their knowledge validated through their real-world experience. Cybersecurity Decision Patterns as Adaptive Knowledge Encoding in Cybersecurity Operations introduces cybersecurity decision patterns (CDPs) as formal knowledge representation to capture, codify, and share knowledge to introduce and enhance security automation with the intent to improve cybersecurity operations efficiency for processing anomalies.<br />
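A hypothetical sketch of what encoding operator knowledge as machine-executable decision rules might look like is shown below. The pattern fields, example rules, and event format are invented for illustration and do not reproduce the dissertation's formal CDP representation; the point is only that validated human judgment can be captured as condition-action pairs that run without a human in the loop.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class DecisionPattern:
    name: str
    condition: Callable[[dict], bool]  # predicate over an observed event
    action: str                        # response to automate when the condition holds

PATTERNS = [
    DecisionPattern(
        name="repeated-auth-failure",
        condition=lambda e: e.get("event") == "auth_failure" and e.get("count", 0) >= 5,
        action="lock_account",
    ),
    DecisionPattern(
        name="port-scan",
        condition=lambda e: e.get("event") == "port_scan",
        action="quarantine_host",
    ),
]

def decide(event):
    """Return the automated responses triggered by an observed event."""
    return [p.action for p in PATTERNS if p.condition(event)]

actions = decide({"event": "auth_failure", "count": 7})
```

Keeping the rules as data rather than hard-coded logic is what allows the pattern library to be shared, reviewed, and extended as operators validate new responses.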
<br />
== INCOSE & IEEE Periodicals and Events ==<br />
Every year, the International Council on Systems Engineering (INCOSE) holds one International Workshop and one International Symposium, as well as regular meetings of various working groups, to encourage discussion of emerging needs and sharing of experience within the systems engineering community. All papers and presentations from these events are available free to INCOSE members, or for a fee to non-members, via Wiley. The library can be accessed here: https://www.incose.org/products-and-publications/papers-presentations-library#<br />
<br />
INCOSE also publishes periodicals, including Systems Engineering (the SE journal), INSIGHT (a magazine), and the INCOSE Members Newsletter. These periodicals are available as PDFs (free to INCOSE members, for a fee to non-members) or as hard copies. More information can be found here: https://www.incose.org/products-and-publications/periodicals<br />
<br />
The Institute of Electrical and Electronics Engineers (IEEE) Systems Council also holds multiple annual conferences on systems engineering, such as the International Systems Conference (SysCon), resulting in a large pool of publications. These publications can be found via: https://ieeesystemscouncil.org/publications<br />
<br />
== NSF- and SERC-funded Research ==<br />
The National Science Foundation (NSF) Division of Civil, Mechanical, and Manufacturing Innovation (CMMI) has been funding academic research on systems engineering under the Engineering Design and Systems Engineering (EDSE) program. According to the [https://www.nsf.gov/funding/pgm_summ.jsp?pims_id=505478&org=ENG&from=home NSF-EDSE website], the program "seeks proposals leading to improved understanding about how processes, organizational structure, social interactions, strategic decision making, and other factors impact success in the planning and execution of engineering design and systems engineering projects". Research under this program can be found via the Award Search feature on the NSF website: https://www.nsf.gov/awardsearch/advancedSearch.jsp (enter "CMMI" for NSF Organization and "EDSE" for Program). <br />
<br />
The Systems Engineering Research Center (SERC) is a University-Affiliated Research Center of the US Department of Defense, consisting of 22 collaborating universities in the US and funding research on different aspects of systems engineering, including Enterprises and Systems of Systems, Trusted Systems, and Systems Engineering and Systems Management Transformation. More information can be found here: https://sercuarc.org/serc-programs-projects/esos/<br />
<br />
<center>[[Set-Based Design|< Previous Article]] | [[Emerging Knowledge|Parent Article]] | Last Article ([[SEBoK Table of Contents|Return to TOC]])</center><br />
<br />
<center>'''SEBoK v. 2.3, released 30 October 2020'''</center><br />
<br />
[[Category:Part 8]]<br />
[[Category:Topic]]</div>Hlehttps://sandbox.sebokwiki.org/index.php?title=Emerging_Research&diff=60549Emerging Research2021-04-01T04:29:27Z<p>Hle: Reorganized article to include other sources of research (as 2 separate sections following Doctoral Dissertations)</p>
Hlehttps://sandbox.sebokwiki.org/index.php?title=Emerging_Topics&diff=60548Emerging Topics2021-04-01T03:21:19Z<p>Hle: Merged Intro to SE Transformation to front page of Emerging Topics section and added Socio-Technical Systems as one of the topics</p>
<hr />
<div>'''''Lead Author:''' Robert Cloutier''<br />
-----<br />
<br />
The Emerging Topics section is intended to introduce and inform the reader on significant and rapidly emerging needs and trends in practicing systems engineering within the community. It is not intended to be all-inclusive. Instead, those topics that have a high probability of significantly impacting the practice of systems engineering, as determined by the SEBoK editorial board, are covered. If the reader has recommendations of emerging topics that should be covered, please send an email to SEBoK@incose.org, or leave a comment in the comment feature at the bottom of this page.<br />
<br />
== Introduction to Systems Engineering Transformation ==<br />
The knowledge covered in this KA reflects the transformation and continued evolution of SE, which are formed by the current and future challenges (see [[Systems Engineering: Historic and Future Challenges]]). This notion of SE transformation and the other areas of knowledge which it includes are discussed briefly below.<br />
<br />
The INCOSE Systems Engineering Vision 2025 (INCOSE 2014) describes the global context for SE, the current state of SE practice and the possible future state of SE. It describes a number of ways in which SE continues to evolve to meet modern system challenges. These are summarized briefly below. <br />
<br />
Systems engineering has evolved from a combination of practices used in a number of related industries (particularly aerospace and defense). These have been used as the basis for a standardized approach to the life cycle of any complex system (see [[Systems Engineering and Management]]). Hence, SE practices are still largely based on heuristics. Efforts are underway to evolve a theoretical foundation for systems engineering (see [[Foundations of Systems Engineering]]), considering foundational knowledge from a variety of sources. <br />
<br />
Systems engineering continues to evolve in response to a long history of increasing system [[Complexity (glossary)|'''complexity''']]. Much of this evolution is in the models and tools focused on specific aspects of SE, such as understanding stakeholder needs, representing system architectures or modeling specific system properties. The integration across disciplines, phases of development, and projects, as well as between technologies and humans, continues to represent a key systems engineering challenge. More recently, the rise of Artificial Intelligence (AI) introduces unprecedented challenges in verification and validation of AI-infused systems, but also opens up new opportunities to implement AI methodologies in the design of systems. <br />
<br />
Systems engineering is gaining recognition across industry, academia, and government. However, SE practice varies across industries, organizations, and system types. Cross-fertilization of systems engineering practices across industries has begun, slowly but surely; however, the global need for systems capabilities has outpaced the progress in systems engineering. <br />
<br />
INCOSE Vision 2025 concludes that SE is poised to play a major role in some of the global challenges of the 21st century, that it has already begun to change to meet these challenges and that it needs to undergo a more significant '''transformation''' to fully meet these challenges. The following bullet points are taken from the summary section of Vision 2025 and define the attributes of a transformed SE discipline in the future:<br />
* Relevant to a broad range of application domains, well beyond its traditional roots in aerospace and defense, to meeting society’s growing quest for sustainable system solutions to providing fundamental needs, in the globally competitive environment.<br />
* Applied more widely to assessments of socio-physical systems in support of policy decisions and other forms of remediation.<br />
* Comprehensively integrating multiple market, social and environmental stakeholder demands against “end-to-end” life-cycle considerations and long-term risks.<br />
* A key integrating role to support collaboration that spans diverse organizational and regional boundaries, and a broad range of disciplines.<br />
* Supported by a more encompassing foundation of theory and sophisticated model-based methods and tools allowing a better understanding of increasingly complex systems and decisions in the face of uncertainty.<br />
* Enhanced by an educational infrastructure that stresses systems thinking and systems analysis at all learning phases.<br />
* Practiced by a growing cadre of professionals who possess not only technical acumen in their domain of application, but who also have mastery of the next generation of tools and methods necessary for the systems and integration challenges of the times.<br />
Some of these future directions of SE are covered in the SEBoK. Others need to be introduced and fully integrated into the SE knowledge areas as they evolve. This KA will be used to provide an overview of these transforming aspects of SE as they emerge. This transformational knowledge will be integrated into all aspects of the SEBoK as it matures.<br />
<br />
==Topics in Part 8==<br />
<br />
*[[Transitioning Systems Engineering to a Model-based Discipline]]<br />
*[[Model-Based Systems Engineering Adoption Trends 2009-2018]]<br />
*[[Digital Engineering]]<br />
*[[Set-Based Design]]<br />
*Socio-Technical Systems<br />
==References==<br />
===Works Cited===<br />
INCOSE. 2014. ''A World in Motion: Systems Engineering Vision 2025.'' San Diego, CA, USA: International Council on Systems Engineering (INCOSE).<br />
<br />
===Additional References===<br />
None.<br />
<br />
<center>[[Emerging Knowledge|< Previous Article]] | [[Emerging Knowledge|Parent Article]] | [[Introduction to SE Transformation|Next Article >]]</center><br />
<br />
<center>'''SEBoK v. 2.3, released 30 October 2020'''</center><br />
<br />
[[Category: Part 8]]<br />
[[Category:Knowledge Area]]</div>Hlehttps://sandbox.sebokwiki.org/index.php?title=Emerging_Knowledge&diff=60547Emerging Knowledge2021-04-01T03:04:43Z<p>Hle: Reorganized subsections and added list of sources for Emerging Research.</p>
<hr />
<div>-----<br />
'''''Lead Author:''' Robert Cloutier''<br />
-----<br />
<br />
Like other portions of the SEBoK, the notion and content of Part 8 are evolving. The Knowledge Areas (KAs), or sections, in Part 8 are based on topics or themes in which knowledge is emerging. For each KA, the emerging knowledge consists of two aspects: Emerging Topics and Emerging Research. <br />
<br />
[[File:SEBoK_Context_Diagram_Inner_P8_Ifezue_Obiako.png|centre|thumb|500x500px|'''Figure 1. SEBoK Part 8 in context (SEBoK Original).''' For more detail see [[Structure of the SEBoK]]]]<br />
<br />
==Scope and Purpose== <br />
While discussions of the practice of and need for systems engineering began appearing in journals from 1950 onward, the practice currently seems to be gaining momentum in most engineering, and even non-engineering, circles.<br />
<br />
The classically trained systems engineers of the 1970s and even 1980s are faced with a sea change in thinking brought on by the rapid advance of software-centric systems, cybersecurity, and agent-based, object-oriented, and model-based practices. These emerging practices bring their own methods and tools. Hall (1962, p. 5) may have been prescient when he wrote “It is hard to say whether increasing complexity is the cause or the effect of man's effort to cope with his expanding environment. In either case a central feature of the trend has been the development of large and very complex systems which tie together modern society. These systems include abstract or non-physical systems, such as government and the economic system.”<br />
<br />
These changes and the rate of change are causing systems engineering to evolve. Some of the practices may not even be recognizable to classically trained systems engineers. This Part of the SEBoK is intended to introduce some of the more significant emerging changes to systems engineering.<br />
As topics discussed in this Part evolve and become mainstream, they will be moved into the appropriate Part of the SEBoK.<br />
<br />
Systems of systems engineering (SoSE) offers a recent example of an emerging topic from the SE community that generated emerging research, ultimately resulting in a foundational body of knowledge that continues to expand. A recent article describing this evolution from emerging topic to established knowledge now resides in Part 4 - [[Socio-Technical Features of Systems of Systems]].<br />
<br />
==Overview of Emerging Topics==<br />
The Emerging Topics section is meant to inform the reader about the more significant emerging changes to the practice of systems engineering. Examples of the questions these emerging topics raise include:<br />
<br />
* What is the potential of these emerging topics to change systems engineering processes or the ways in which we perform systems engineering?<br />
*How will the development of artificial intelligence impact systems engineering?<br />
**Will AI change the way we think of systems architecture? <br />
**How will we perform V&V of an AI system? <br />
*How will the push towards vertically integrated digital engineering influence systems engineering?<br />
*How are social features becoming more tightly connected to technical features of systems, and how is the modeling of socio-technical systems infusing into practice?<br />
<br />
==Overview of Emerging Research==<br />
As these emerging topics gain visibility, researchers will begin to investigate them. Corporate R&D may do early work, but academia and government will formalize this research. The Emerging Research section is a place to gather the references to this disparate work into a single repository to better inform systems engineers working on related topics. The references are collected from the following sources: <br />
* PhD dissertations<br />
* INCOSE publications and events <br />
* IEEE publications and events<br />
* Research funded by the National Science Foundation (NSF) – Engineering Design and Systems Engineering (EDSE) program<br />
* Research funded by the Systems Engineering Research Center (SERC)<br />
<br />
==References==<br />
===Works Cited===<br />
Hall, Arthur D. (1962). ''A Methodology for Systems Engineering.'' New York, NY, USA: Van Nostrand.<br />
<br />
===Additional References===<br />
Engstrom, E.W. (1957). "Systems engineering: A growing concept." ''Electrical Engineering'', vol. 76, no. 2, pp. 113-116, Feb. 1957. doi: 10.1109/EE.1957.6442968.<br />
<br />
Goode, Harry H. and Robert E. Machol. (1957). ''System Engineering: An Introduction to the Design of Large-Scale Systems.'' New York, NY, USA: McGraw-Hill.<br />
<br />
Kelly, Mervin J. (1950). “The Bell Telephone Laboratories—An example of an institute of creative technology”. Proceedings of the Royal Society B. Vol. 137, Issue 889. https://doi.org/10.1098/rspb.1950.0050.<br />
<br />
<center>[[Singapore Water Management|< Previous Article]] | [[SEBoK Table of Contents|Parent Article]] | [[Emerging Topics|Next Article >]]</center><br />
<br />
<center>'''SEBoK v. 2.3, released 30 October 2020'''</center><br />
<br />
[[Category: Part 8]]<br />
[[Category:Part]]</div>Hle