224 Cards in this Set
Stakeholder Requirements Definition
Activities |
Elicit Stakeholder Requirements
Define Stakeholder Requirements
Analyze and Maintain Stakeholder Requirements |
|
Requirements Analysis
Activities |
Define the System Requirements
Analyze and Maintain the System Requirements |
|
Architectural Design
Activities |
Define the Architecture
Analyze and Evaluate the Architecture
Document and Maintain the Architecture |
|
Implementation
Activities |
Plan the Implementation
Perform the Implementation |
|
Integration
Activities |
Plan Integration
Perform Integration |
|
Verification
Activities |
Plan Verification
Perform Verification |
|
Transition
Activities |
Plan the Transition
Perform the Transition |
|
Validation
Activities |
Plan Validation
Perform Validation |
|
Operation
Activities |
Prepare for Operation
Perform Operational Activation and Check-out
Use System for Operations
Perform Operational Problem Resolution
Support the Customer |
|
Maintenance
Activities |
Plan Maintenance
Perform Maintenance |
|
Disposal
Activities |
Plan Disposal
Perform Disposal
Finalize the Disposal |
|
Project Planning
Activities |
Define the Project
Plan the Project Resources
Plan the Project Technical and Quality Management
Activate the Project |
|
Project Assessment and Control
Activities |
Assess the Project
Control the Project
Close the Project |
|
Decision Management
Activities |
Plan and Define Decisions
Analyze the Decision Information
Track the Decision |
|
Risk Management
Activities |
Plan Risk Management
Manage the Risk Profile
Analyze Risks
Treat Risks
Monitor Risks
Evaluate the Risk Management Process |
|
Configuration Management
Activities |
Plan Configuration Management
Perform Configuration Management |
|
Information Management
Activities |
Plan Information Management
Perform Information Management |
|
Measurement
Activities |
Plan Measurement
Perform Measurement
Evaluate Measurement |
|
Acquisition
Activities |
Prepare for the Acquisition
Advertise the Acquisition and Select the Supplier
Initiate an Agreement
Monitor the Agreement
Accept the Product or Service |
|
Supply
Activities |
Identify Opportunities
Respond to a Tender
Initiate an Agreement
Execute the Agreement
Deliver and Support the Product or Service
Close the Agreement |
|
Life Cycle Model Management
Activities |
Establish the Process
Assess the Process
Improve the Process |
|
Infrastructure Management
Activities |
Establish the Infrastructure
Maintain the Infrastructure |
|
Project Portfolio Management
Activities |
Initiate Projects
Evaluate the Portfolio of Projects
Close Projects |
|
Human Resource Management
Activities |
Identify Skills
Develop Skills
Acquire and Provide Skills |
|
Quality Management
Activities |
Plan Quality Management
Assess Quality Management
Perform Quality Management Corrective Action |
|
Tailoring
Activities |
Identify and Document Tailoring Influences
Take Account of Recommended or Mandated Standards
Obtain Input from All Affected Parties
Make Tailoring Decisions
Tailor the Affected Lifecycle Process |
|
Organizational Project-Enabling Processes
|
Project Portfolio Management,
Infrastructure Management, Life Cycle Model Management, Human Resource Management, Quality Management |
|
Project Processes
|
Project Planning,
Project Assessment and Control, Decision Management, Risk Management, Configuration Management, Information Management, Measurement, Tailoring |
|
Technical Processes
|
Stakeholder Requirements Definition,
Requirements Analysis, Architectural Design, Implementation, Integration, Verification, Transition, Validation, Operation, Maintenance, Disposal |
|
Agreement Processes
|
Acquisition,
Supply |
|
Acquisition Inputs
|
Acquisition Need
Enabling System Requirements
Acquired System
Acquisition Proposal |
|
Architectural Design Inputs
|
System Specification
Updated RVTM
Life Cycle Constraints
Specification Tree
System Functional Interfaces
System Functions
Concept Documents
System Requirements
System Requirements Traceability |
|
Configuration Management Inputs
|
Change Requests
Configuration Items |
|
Decision Management Inputs
|
Decision Situation
|
|
Disposal Inputs
|
Concept Documents
Validated System |
|
Human Resource Management Inputs
|
Project Status Report
Organization Strategic Plan
Project Portfolio
Project Human Resource Needs |
|
Implementation Inputs
|
System Architecture Description
System Element Requirements Traceability
System Element Descriptions
Interface Requirements
Verification Criteria
Validation Criteria
System Element Requirements
Concept Documents |
|
Information Management Inputs
|
Information Items
|
|
Infrastructure Management Inputs
|
Project Infrastructure Needs
Organization Infrastructure Needs |
|
Integration Inputs
|
System Elements
System Element Descriptions
System Element Documentation
Accepted System |
|
Life Cycle Model Management Inputs
|
Interface Requirements
Infrastructure Management Report
Organization Tailoring Strategy
Measurement Evaluation Report
Measurement Report
Industry Standards
Organization Strategic Plan
Corrective Actions
Quality Management Guidelines |
|
Maintenance Inputs
|
Validated System
Initial Trained Operators and Maintainers
Concept Documents
Operator/Maintainer Training
Validation Report |
|
Measurement Inputs
|
Measures of Performance Data
Measures of Effectiveness Needs
Measures of Performance Needs
Technical Performance Measures Needs
Technical Performance Measures Data
Project Performance Measures Needs
Project Performance Measures Data
Organizational Process Performance Measures Needs
Organizational Process Performance Measures Data
Measures of Effectiveness Data |
|
Operation Inputs
|
Validation Report
Operator/Maintainer Training
Initial Trained Operators and Maintainers
Validated System
Concept Documents |
|
Project Assessment and Control Inputs
|
Project Procedures
Project Reports
Project Budget
Project Schedule
Work Breakdown Structure (WBS)
Systems Engineering Plan (SEP)
Configuration Baselines
Risk Profile
Project Plan |
|
Project Planning Inputs
|
Skilled Personnel
Project Tailoring Strategy
Project Portfolio
Supply Proposal
Standard Life Cycle Models
Corrective Actions |
|
Project Planning Inputs
|
Source Documents
Strategy Documents |
|
Project Portfolio Management Inputs
|
Project Status Report
Supply Strategy
Supply Agreement
Organization/Enterprise Portfolio Direction & Constraints |
|
Quality Management Inputs
|
Organization Strategic Plan
Customer Satisfaction Inputs
Project Status Report
Process Review Criteria
QMP
Initial RVTM
Stakeholder Requirements
Concept Documents
Stakeholder Requirements Traceability
Measures of Effectiveness |
|
Risk Management Inputs
|
Candidate Risks and Opportunities
|
|
Stakeholder Requirements Definition Inputs
|
Project Constraints
Stakeholder Needs
Source Documents |
|
Supply Inputs
|
Supply Payment
Supply RFP
Organization Strategic Plan |
|
Tailoring Inputs
|
Acquisition Agreement
Supply Agreement
Industry Standards
Organization Strategic Plan |
|
Transition Inputs
|
Concept Documents
Verification Report
Verified System
Final RVTM
Initial Trained Operators and Maintainers
Operator/Maintainer Training
Interface Control Documents |
|
Validation Inputs
|
Stakeholder Requirements
Transition Report
Installed System
Validation Criteria
Concept Documents
Final RVTM |
|
Verification Inputs
|
Verification Criteria
Interface Requirements
Integrated System
Interface Control Documents
Integration Report
Updated RVTM
Specification Tree
System Requirements |
|
Acquisition Outputs
|
Accepted System
Acquisition Strategy
Acquisition Request for Proposal
Acquisition Agreement
Acquisition Report
Acquisition Payment |
|
Architectural Design Outputs
|
System Element Descriptions
Technical Performance Measures Data
System Element Requirements Traceability
System Element Requirements
Interface Requirements
System Architecture Description
Technical Performance Measures Needs |
|
Configuration Management Outputs
|
Configuration Management Strategy
Configuration Baselines
Configuration Management Report |
|
Decision Management Outputs
|
Decision Report
Decision Management Strategy |
|
Disposal Outputs
|
Disposal Procedure
Disposal Report
Disposal Constraints on Design
Disposal Enabling System Requirements
Disposal Strategy
Disposed System |
|
Human Resource Management Outputs
|
Skill Development Plan
Skilled Personnel
Skills Matrix |
|
Implementation Outputs
|
Operator/Maintainer Training
Implementation Enabling System Requirements
Initial Trained Operators and Maintainers
System Element Documentation
System Elements
Implementation Constraints on Design
Implementation Strategy |
|
Information Management Outputs
|
Information Repository
Information Management Report
Information Management Strategy |
|
Infrastructure Management Outputs
|
Infrastructure Management Report
Organization Infrastructure |
|
Infrastructure Management Outputs
|
Project Infrastructure
|
|
Integration Outputs
|
Integration Report
Integration Procedure
Integration Constraints on Design
Integration Enabling System Requirements
Integration Strategy
Interface Control Documents
Integrated System |
|
Life Cycle Model Management Outputs
|
Standard Life Cycle Models
Organizational Process Performance Measures Data
Process Review Criteria
Organization/Enterprise Policies, Procedures, and Standards
Organizational Process Performance Measurement Needs |
|
Maintenance Outputs
|
Maintenance Strategy
Maintenance Enabling System Requirements
Maintenance Report
Maintenance Constraints on Design
Maintenance Procedure |
|
Measurement Outputs
|
Measurement Repository
Measurement Strategy
Measurement Evaluation Report
Measurement Report |
|
Operation Outputs
|
Operation Enabling System Requirements
Operation Strategy
Operation Report
Operation Constraints on Design |
|
Project Assessment and Control Outputs
|
Change Requests
Project Status Report
Project Performance Measures Data
Project Directives |
|
Project Planning Outputs
|
Project Constraints
Project Schedule
Systems Engineering Plan (SEP)
Work Breakdown Structure (WBS)
Project Procedures and Standards
Project Performance Measures Needs
Acquisition Need
Project Infrastructure Needs
Project Human Resource Needs
QMP
Project Plan |
|
Project Portfolio Management Outputs
|
Project Portfolio
Organization Infrastructure Needs
Organization Strategic Plan |
|
Quality Management Outputs
|
Customer Satisfaction Report
Corrective Actions
Quality Management Guidelines |
|
Requirements Analysis Outputs
|
System Functions
Measures of Performance Needs
Measures of Performance Data
Verification Criteria
System Requirements Traceability
Updated RVTM
System Specification
Specification Tree
System Functional Interfaces
System Requirements |
|
Risk Management Outputs
|
Risk Report
Risk Profile
Risk Strategy |
|
Stakeholder Requirements Definition Outputs
|
Concept Documents
Measures of Effectiveness Needs
Verification Criteria
Stakeholder Requirements Traceability
Initial RVTM
Stakeholder Requirements
Measures of Effectiveness Data |
|
Supply Outputs
|
Supply Proposal
Supply Report
Supplied System
Supply Agreement
Supply Strategy |
|
Tailoring Outputs
|
Project Tailoring Strategy
Organization Tailoring Strategy |
|
Transition Outputs
|
Installed System
Transition Report
Transition Strategy
Transition Enabling System Requirements
Transition Constraints on Design
Installation Procedure |
|
Validation Outputs
|
Validation Report
Validation Constraints on Design
Validation Enabling System Requirements
Validation Strategy |
|
Validation Outputs
|
Validated System
Validation Procedure |
|
Verification Outputs
|
Verification Report
Verification Strategy
Verification Procedure
Verification Enabling System Requirements
Verified System
Final RVTM
Verification Constraints on Design |
|
The appropriate degree of formality in the
execution of any SE process activity is determined by: |
1. The need for communication of what is being done (across members of a project team, across organizations, or over time to support future activities)
2. The level of uncertainty
3. The degree of complexity
4. The consequences to human welfare. |
|
Important Dates in the Origins of SE as a Discipline
|
1829 Rocket locomotive; progenitor of main‐line railway motive power
1937 British multi‐disciplinary team to analyze the air defense system
1939‐1945 Bell Labs supported NIKE development
1951‐1980 SAGE Air Defense System defined and managed by MIT
1956 Invention of systems analysis by RAND Corporation
1962 Publication of A Methodology for Systems Engineering
1969 Jay Forrester (Modeling Urban Systems at MIT)
1990 NCOSE established
1995 INCOSE emerged from NCOSE to incorporate International view
1969 Mil‐Std 499
1974 Mil‐Std 499A
1979 Army Field Manual 770‐78
1994 Mil‐Std 499B (not released)
1994 Perry Memorandum urges military contractors to adopt commercial practices. EIA 632 IS (Interim Standard) and IEEE 1220 (Trial Version) instead of Mil‐Std 499B
1998 EIA 632 Released
1999 IEEE 1220 Released
2002 Release of ISO/IEC 15288:2002
2008 Release of ISO/IEC 15288:2008 |
|
Challenges that influence the development of systems of systems
|
1. System elements operate independently
2. System elements have different life cycles
3. The initial requirements are likely to be ambiguous
4. Complexity is a major issue
5. Management can overshadow engineering
6. Fuzzy boundaries cause confusion
7. SoS engineering is never finished |
|
Three Aspects of the Life Cycle
|
business aspect (business case),
the budget aspect (funding), and the technical aspect (product) |
|
Decision gates address the following questions:
|
• Does the project deliverable still satisfy the business case?
• Is it affordable?
• Can it be delivered when needed? |
|
The primary objectives of decision gates are to:
|
• Ensure that the elaboration of the business and technical baselines is acceptable and will lead to satisfactory V&V
• Ensure that the next step is achievable and the risk of proceeding is acceptable
• Continue to foster buyer and seller teamwork
• Synchronize project activities. |
|
Two decision gates in any project
|
authority to proceed and
final acceptance of the project deliverable |
|
At each gate the decision options are:
|
• Acceptable – Proceed with the next stage of the project
• Acceptable with reservations – Proceed and respond to action items
• Unacceptable: Do not proceed – Continue this stage and repeat the review when ready
• Unacceptable: Return to a preceding stage
• Unacceptable: Put a hold on project activity
• Unsalvageable: Terminate the project. |
|
Decision gate descriptions should identify the:
|
• Purpose of the decision gate
• Host and chairperson
• Attendees
• Location
• Agenda and how the decision gate is to be conducted
• Evidence to be evaluated
• Actions resulting from the decision gate
• Method of closing the review. |
|
Generic life‐cycle stages
|
EXPLORATORY RESEARCH
CONCEPT
DEVELOPMENT
PRODUCTION
UTILIZATION
SUPPORT
RETIREMENT |
|
Life Cycle Approaches
|
Plan‐Driven Methods
Incremental and Iterative Development
Lean Development |
|
The following illustrates some waste considerations for SE practice in each of the LAI waste classifications
|
1. Over‐Processing
2. Waiting
3. Unnecessary Movement
4. Over‐Production
5. Transportation
6. Inventory
7. Defects |
|
Lean Development Principles
|
Value
Value Stream
Flow
Pull
Perfection
Respect |
|
Agile Development principles
|
1. Satisfy customer through early and continuous delivery
2. Welcome changing requirements
3. Deliver working software frequently
4. Business people and developers work together
5. Motivated individuals
6. Face-to-face conversations
7. Working software is the primary measure of progress
8. Sustainable development
9. Technical excellence and good design
10. Simplicity
11. Self-organizing teams
12. Team reflects on how to become more effective at regular intervals |
|
Stakeholder Requirements Definition common approaches and tips
|
Develop a description of the user community
Formally place stakeholder requirements under configuration control.
Establish relationships and communications between systems engineers and stakeholders.
Identify all stakeholders.
Avoid designing a final solution.
Write clearly and create statements with quantifiable values.
Capture source and rationale for each requirement. |
|
Systems engineering should support program and project management in defining what must be done and gathering the information, personnel, and analysis tools to define the mission or program objectives.
|
1. Identify users and other stakeholders and understand their needs.
2. Perform mission analysis to establish the operational environment, requirements functionality, and architecture and to assess existing capability.
3. Document the inadequacies or cost of existing systems to perform new mission needs.
4. If mission success is technology driven, develop concepts and document the new capabilities that are made possible by the introduction of new or upgraded technology.
5. Prepare a justification of the need for this mission compared to alternative missions.
6. Prepare the necessary documents to request funding for the first program stage.
7. If system procurement is involved, develop the information needed to release an RFP, establish the selection criteria, and perform a source selection. |
|
Successful definition of user and stakeholder needs
|
• User organizations have gained authorization for new system acquisition.
• Program development organizations have prepared a SOW, SRD, and gained approval for new system acquisition.
• Potential contractors have submitted a proposal and have been selected to develop and deliver the system.
• If the system is market driven, the marketing group has learned what consumers want to buy.
• If the system is market and technology driven, the development team has obtained approval to develop the new system from the corporation. |
|
Examples of source documents
|
1. New or updated customer needs, requirements, and objectives in terms of missions, ConOps, MOEs, technical performance, utilization environments, and constraints
2. Technology base data including identification of key technologies, performance, maturity, cost, and risks
3. Requirements from contractually cited documents for the system and its configuration items (CIs)
4. Technical objectives
5. Records of meetings and conversations with the customer. |
|
Prerequisites for the successful performance of establishing the requirements database
|
1. Empower a systems analysis team with the authority and mission to carry out the activity.
2. Assign experienced Systems Engineer(s) to lead the team.
3. Assign experienced team members from relevant engineering, test, manufacturing, and operations (including logistics) disciplines to be available to the team.
4. Establish the formal decision mechanism (e.g., a design decision database) and any supporting tools; select and obtain necessary SE tools for the activity.
5. Complete the relevant training of team members in the use of tools selected for the activity.
6. Define the formats of the output deliverables from this activity (to permit the definition of any database schema tailoring that may be needed). |
|
The following guidance has proven helpful in establishing a Requirements Database
|
1. Take the highest priority source document identified and ensure that it is recorded in the database
2. Analyze the content of each parent requirement produced in the previous step. |
|
Source Information
|
1. Project requirements
2. Mission requirements
3. Customer specified constraints
4. Interface, environmental, and non‐functional requirements
5. Unclear issues discovered in the Requirements Analysis Process
6. An audit trail of the resolution of the issues raised
7. V&V methods required by the customer
8. Traceability to source documentation
9. Substantiation (verification) that the database is a valid interpretation of user needs. |
|
Understanding operational needs typically produces
|
• A source of specific and derived requirements that meet the customer and user needs and objectives.
• Invaluable insight for Integrated Product Development Team (IPDT) members as they design, develop, verify, and validate the system.
• Diminished risk of latent system defects in the delivered operational systems. |
|
A ConOps document typically comprises the following
|
• A top‐level operational concept definition containing approved operational behavior models for each system operational mode (which can be documented as functional flow diagrams), supporting time lines, and event transcripts, which are fully traceable from source requirements
• Context diagrams
• Mission Analyses
• ConOps objectives |
|
Other ConOps objectives
|
1. To provide traceability between operational needs and the captured source requirements.
2. To establish a basis for requirements to support the system over its life, such as personnel requirements, support requirements, etc.
3. To establish a basis for verification planning, system‐level verification requirements, and any requirements for environmental simulators.
4. To generate operational analysis models to test the validity of external interfaces between the system and its environment, including interactions with external systems.
5. To provide the basis for computation of system capacity, behavior under/overload, and mission‐effectiveness calculations.
6. To validate requirements at all levels and to discover implicit requirements overlooked from other sources. |
|
A ConOps is established as follows
|
1. Start with the source operational requirements; deduce a set of statements describing the higher‐level, mission‐oriented system objectives and record them. The following typical source documents serve as inputs for the ConOps:
• System business case
• Statement of User Need
• Technical operational requirements
• System operational requirements documents
• Statement of operational objectives
• SOW
• Customer Standard Operating Procedures.
2. Review the system objectives with end users and operational personnel and record the conflicts.
3. Define and model the operational boundaries.
4. For each model, generate a context diagram to represent the model boundary.
5. Identify all of the possible types of observable input and output events that can occur between the system and its interacting external systems.
6. If the inputs/outputs are expected to be significantly affected by the environment between the system and the external systems, add concurrent functions to the context diagram to represent these transformations and add input and output events to the database to account for the differences in event timing between when an output is emitted to when an input is received.
7. Record the existence of a system interface between the system and the environment or external system.
8. For each class of interaction between a part of the system and an external system, create a functional flow diagram to model the sequence of interactions as triggered by the stimuli events generated by the external systems.
9. Add information to trace the function timing from performance requirements and simulate the timing of the functional flow diagrams to confirm operational correctness or to expose dynamic inconsistencies. Review results with users and operational personnel.
10. Develop timelines, approved by end users, to supplement the source requirements. |
|
The following measures are often used to gauge the progress and completion of the ConOps activity
|
1. Functional Flow Diagrams required and completed
2. Number of system external interfaces
3. Number of scenarios defined
4. Number of unresolved source requirement statements
5. Missing source documents
6. Number of significant dynamic inconsistencies discovered in the source requirements. |
|
Sources of Requirements
|
External Environment
Organization’s Environment
Project Environment
Project Support
Process Groups for Engineering Systems
Organizational Support |
|
Characteristics of Good Requirement
|
Necessary
Implementation Independent
Clear and Concise
Complete
Consistent
Achievable
Traceable
Verifiable |
|
The use of certain words should be avoided in requirements because they convey uncertainty. These include
|
• Superlatives – such as "best" and "most"
• Subjective language – such as "user friendly," "easy to use," and "cost effective" • Vague pronouns – such as “he,” “she,” “this,” “that,” “they,” “their,” “who,” “it,” and “which” • Ambiguous adverbs and adjectives – such as "almost always," "significant," "minimal," “timely,” “real‐time,” “precisely,” “appropriately,” “approximately,” “various,” “multiple,” “many,” “few,” “limited,” and “accordingly” • Open‐ended, non‐verifiable terms – such as "provide support," "but not limited to," and "as a minimum" • Comparative phrases – such as "better than" and "higher quality" • Loopholes – such as "if possible," "as appropriate," and "as applicable" • Other indefinites –such as “etc.,” ”and so on,” “to be determined (TBD),” “to be reviewed (TBR),” and “to be supplied (TBS).” TBD, TBR, and TBS items should be logged and documented in a table at the end of the specification with an assigned person for closure and a due date. |
|
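The weak-word rule above lends itself to simple automation. Below is an illustrative sketch (not a tool from the handbook) of a checker that flags requirement statements containing the terms listed on this card; the term list is abridged, and the names are assumptions.

```python
# Illustrative sketch: flag requirement statements that contain weak,
# subjective, or non-verifiable terms from the card above (abridged).
# Plain substring matching is used, so it can over-match (e.g. "most"
# inside "almost"); a production checker would use word-boundary rules.
WEAK_TERMS = [
    "best", "user friendly", "easy to use", "cost effective",
    "almost always", "significant", "minimal", "timely", "real-time",
    "provide support", "but not limited to", "as a minimum",
    "better than", "higher quality", "if possible", "as appropriate",
    "as applicable", "etc.", "and so on", "TBD", "TBR", "TBS",
]

def flag_weak_terms(requirement: str) -> list[str]:
    """Return the weak terms found in a single requirement statement."""
    text = requirement.lower()
    return [term for term in WEAK_TERMS if term.lower() in text]
```

A flagged statement is not automatically wrong, but each hit should be rewritten with a quantifiable value or logged (for TBD/TBR/TBS) with an owner and due date, as the card prescribes.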
Characteristics of the set of requirements
|
Complete
Consistent
Affordable
Bounded |
|
Typical constraints on the system may include
|
• Cost and schedule
• Mandated use of commercial off‐the‐shelf (COTS) equipment
• Operational environment and use of pre‐existing facilities and system elements
• Operational interfaces with other systems or organizations. |
|
The ConOps, for example, can be helpful in identifying adverse consequences of derived requirements
|
• Is unnecessary risk being introduced?
• Is the technology producible?
• Are sufficient resources available to move forward?
• Are trade studies needed to determine appropriate ranges of performance? |
|
Requirements Analysis Process steps (1-6)
|
1. Establish constraints on the system
• Cost
• Schedule
• Use of COTS equipment
• Use of Non‐Developmental Items (NDI)
• Use of Existing Facilities
• Operational Interfaces with other systems or organizations
• Operational environment.
2. Examine and characterize the mission in measurable requirement categories
3. Using detailed functional analysis, extract new functional requirements
4. Larger systems may require a high‐level system simulation evolved from the system architecture
5. Examine any adverse consequences introduced by deriving and incorporating requirements. For example:
• Is unnecessary risk being introduced?
• Is the system cost within budget limitations and the budget profile?
• Will the technology be ready for production?
• Are sufficient resources available for production and operation?
6. Is the schedule realistic and achievable (be sure to consider downstream activities such as design and verification associated with the requirements)? |
|
Requirements Analysis Process steps (7-9)
|
7. Where existing user requirements cannot be confirmed, perform trade studies to determine more appropriate requirements, and achieve the best‐balanced performance at minimum cost. Where critical resources (e.g., weight, power, memory, and throughput) must be allocated, trade studies may be required to determine the proper allocation.
8. Incorporate revised and derived requirements and parameters resulting from the Requirements Analysis Process into the requirements database and maintain traceability to source requirements.
9. Prepare and submit the specification documents (see Sections 4.2.2.6 and 4.2.2.7) to all organizations for review. Upon approval, the documents are entered into the formal release system and maintained under configuration management control. Any further changes will require Configuration Control Board (CCB) approval. |
|
The following measures are often used to gauge the progress and completion of this requirements analysis activity
|
1. Number or percent of requirements defined, allocated, and traced
2. Time to issue draft
3. Number of meetings held
4. Number and trends of TBD, TBR, and TBS requirements
5. Number of requirement issues identified (e.g. requirements not stated in a verifiable way)
6. Number and frequency of changes (additions, modifications, and deletions). |
|
Some of the major challenges in performing this requirements engineering task are as follows
|
• An envisioned system is seldom, if ever, designed to work totally independently of the other systems in the customer's environment. This means that the environment in which the system is to operate must be known and documented as thoroughly as the system itself.
• COTS solutions play a major role in defining the system. While requirements are supposed to be independent of solution, being able to achieve an implementable solution within the resource constraints available is the primary requirement.
• Every aspect of an envisioned system's function and performance cannot practically be specified. Thus, a level of requirement specification must be established that represents a cost‐effective balance between the cost of generating, implementing, and verifying requirements versus the risk of not getting a system that meets customers’ expectations. In each case, the cost of non‐performance is a major driver. |
|
Specification Tree and Specification Development activity
|
1. Derive the Specification Tree from the system architecture configuration
2. Create an outline for each specification in the Specification Tree
3. Craft requirements for each specification, completing all flowdown and accommodating derived requirements emerging from the definitions of each CI |
|
For the Specification Tree (completeness and balance)
|
1. Its completeness as measured by the inclusion of all items required in the system
2. Its balance as determined by its span of control and fan‐out from each entity. |
|
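The balance check above can be sketched in code. This is an illustrative example only; the tree contents, function names, and the "comfortable" fan-out range are assumptions, not handbook values.

```python
# Hypothetical sketch: represent a specification tree as
# {parent: [children]} and compute each specification's fan-out
# (span of control) to judge the tree's balance.
spec_tree = {
    "System Spec": ["Segment A Spec", "Segment B Spec"],
    "Segment A Spec": ["CI-1 Spec", "CI-2 Spec", "CI-3 Spec"],
    "Segment B Spec": ["CI-4 Spec"],
}

def fan_out(tree: dict[str, list[str]]) -> dict[str, int]:
    """Number of child specifications controlled by each specification."""
    return {parent: len(children) for parent, children in tree.items()}

def unbalanced(tree: dict[str, list[str]], lo: int = 2, hi: int = 7) -> list[str]:
    """Specs whose span of control falls outside an assumed comfortable range."""
    return [p for p, n in fan_out(tree).items() if not lo <= n <= hi]
```

Here `Segment B Spec` would be flagged: a fan-out of one suggests an unnecessary layer, just as a very large fan-out suggests a span of control that is too wide.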
Specification metrics
|
1. Number of TBDs and TBRs in specifications (goal is zero)
2. Number of requirements in the specification (50 to 250 functional/performance requirements is the ideal range)
3. Stability of the requirements as the development progresses. |
|
Traceability is not an end goal in and of itself but, rather, a tool that can be used to
|
1. Improve the integrity and accuracy of all requirements, from the system level all the way down to the lowest CI
2. Allow tracking of the requirements development and allocation and generating overall measures
3. Support easier maintenance and change implementation of the system in the future. |
|
Traceability should be maintained throughout all levels of documentation, as follows
|
1. Allocate all system requirements to hardware, software, or manual operations, facilities, interfaces, services, or others as required
2. Ensure that all functional and performance requirements or design constraints, either derived from or flowed down directly to a lower system architecture element, actually have been allocated to that element
3. Ensure that traceability of requirements from source documentation is maintained through the project’s life until the verification program is completed and the system is accepted by the customer
4. Ensure that the history of each requirement on the system is maintained and is retrievable. |
|
The tool should generate the following directly from the database
|
a. Requirements Statements with PUIDs
b. RVTM – a list of requirements, their verification attributes, and their traces
c. Requirements Traceability Matrices (RTM) – a list of requirements and their traces
d. Lists of TBD, TBR, and TBS issues
e. Specifications
f. Requirements measures (e.g. requirements stability). |
|
Establishing and maintaining requirement traceability
|
1. While requirements can be traced manually on small projects, such an approach is generally not considered cost‐effective.
2. Each requirement must be traceable using a PUID.
3. The specification tree provides the framework for parent‐child vertical traceability.
4. The functions and sub‐functions for which each system area is responsible, and the top‐level system requirements associated with those functions, must be identified and documented in the traceability tool.
5. The most difficult part of requirements flowdown can be the derivation of new requirements.
6. The specifications should be reviewed and audited as they are produced to verify that the allocation activity is correct and complete.
7. Once allocations are verified, RTMs are generated directly from the database and maintained under configuration management control. These matrices are used as part of the audit process. |
|
The following measures are often used to gauge the progress and completion of the allocation and traceability activity
|
1. Number and trends of requirements in the database
2. Number of TBD, TBR, and TBS requirements
3. Number (or percent) of system requirements traceable to each lower level and number (or percent) of lower‐level requirements traceable back to system requirements. |
|
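The measures above can be computed directly from a requirements database. The sketch below is illustrative only: the record layout (`id`, `status`, `traces`) and the sample PUIDs are invented, not taken from any particular traceability tool.

```python
# Hypothetical traceability-measure sketch; field names and sample
# requirement records are invented for illustration.

def traceability_measures(requirements):
    """requirements: list of dicts with 'id', 'status', and 'traces'
    (IDs of lower-level requirements allocated from this one)."""
    total = len(requirements)
    open_items = [r for r in requirements if r["status"] in ("TBD", "TBR", "TBS")]
    traced = [r for r in requirements if r["traces"]]
    return {
        "total": total,
        "open_tbx": len(open_items),  # TBD/TBR/TBS count (goal is zero)
        "pct_traced": 100.0 * len(traced) / total if total else 0.0,
    }

reqs = [
    {"id": "SYS-001", "status": "Approved", "traces": ["SS-010", "SS-011"]},
    {"id": "SYS-002", "status": "TBR", "traces": []},
    {"id": "SYS-003", "status": "Approved", "traces": ["SS-020"]},
]
print(traceability_measures(reqs))
```

Trending these numbers over time, as the cards suggest, is what gauges progress toward allocation completion.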
To be non‐ambiguous, requirements must be broken down into constituent parts in a traceable hierarchy such that each individual requirement statement is
|
• Clear, unique, consistent, stand‐alone (not grouped), and verifiable
• Traceable to an identified source requirement
• Neither redundant with, nor in conflict with, any other known requirement
• Not biased by any particular implementation. |
|
The overall objective is to create a System Architecture (defined as the selection of the types of system elements, their characteristics, and their arrangement) that meets the following criteria:
|
1. Satisfies the requirements (including external interfaces)
2. Implements the functional architecture
3. Is acceptably close to the true optimum within the constraints of time, budget, available knowledge and skills, and other resources
4. Is consistent with the technical maturity and acceptable risks of available elements |
|
As Rechtin and Maier define it, systems architecting builds on four methodologies:
|
• Normative (solution‐based), such as building codes and communication standards.
• Rational (method‐based), such as systems analysis and engineering.
• Participative (stakeholder‐based), such as concurrent engineering and brainstorming.
• Heuristic (lessons‐learned), such as “Simplify. Simplify. Simplify.” |
|
System Architecture selection criteria are the quantifiable consequences of system implementation and operation. Selection criteria include:
|
1. Measures of the system’s ability to fulfill its mission as defined by the requirements
2. Ability to operate within resource constraints
3. Accommodation of interfaces
4. Ability to adapt to projected future needs and interoperating systems (i.e., system robustness)
5. Costs (economic and otherwise) of implementing and operating the system over its entire life cycle
6. Side effects, both positive and adverse, associated with particular architecture options
7. Measures of risk
8. Measures of quality factors
9. Measures of subjective factors that make the system more or less acceptable to customers, users, or clients (e.g., aesthetic characteristics). |
|
System Architecture options should satisfy the following criteria:
|
• With reasonable certainty, spans the region of design space that contains the optimum
• Supports analysis that efficiently closes on the optimum
• Contains all relevant design features necessary to provide a firm baseline for the subsequent round of system definition at the next level of detail. |
|
Process for defining system elements:
|
1. Create a list of the elements that will make up the system.
2. Identify a set of option descriptors for each element in the list.
3. Define the envelope of design space.
4. Develop a process to generate a range of element options.
5. Borrow from similar existing systems or create new element options through application of the appropriate structured creativity methods.
6. Generate a set of element options that populates the design space envelope.
7. Develop the attendant data describing each element option and its interfaces with other elements, as needed, to support the selection process and subsequent system definition activity. |
|
System element descriptive and supporting documentation should provide:
|
1. Set of descriptors that define the dimensions of the design space.
2. Set of element options, each characterized by a description of its salient features, parameter values, and interactions with other elements
3. Supporting documentation of the rationale
4. Identification of design drivers
5. Documented assurance, with reasonable certainty for the set of options as a whole, that a basis has been established for efficient selection of the optimum architecture. |
|
Synthesize Multiple System Architectures: Key tasks associated with this activity are:
|
1. Assemble candidate System Architectures
2. Verify that the resulting System Architecture options meet the selection criteria
3. Ensure in‐process validation by involving the customer or user in this process
4. Screen the set of System Architecture options generated so far, retaining only a reasonable number of the best |
|
Steps for Analyzing and Selecting Preferred System Architecture/Element Solution
|
1. Create models that map each option’s characteristics onto measures of success against the criteria
2. Use Trade Studies methods to compare and rank the options.
3. Modify options or combine the best features of several options to correct shortcomings and advance the capability of the leading contenders.
4. Perform sensitivity analysis to test the robustness of the final selection.
5. Document the process. |
|
After the System Architecture has been selected, sufficient detail must be developed on the elements to
|
(1) ensure that they will perform as an integrated system within their intended environment and
(2) enable subsequent development or design activity, as necessary, to fully define each element. |
|
The steps to define, refine, and integrate the system physical configuration are as follows:
|
1. Create a system‐level description of system operation, using appropriate tools and notation, to enable a thorough analysis of the system’s behavior at the interfaces among all of its elements.
2. Enter the available data about the elements into the system‐level description.
3. Perform design activity on the elements as needed to provide the additional data needed for the system‐level description.
4. Perform liaison with customer representatives regarding definition of interfaces with the system’s operating environment throughout its life cycle.
5. Analyze system operation to verify its compliance with requirements. |
|
Systems engineering activities conducted in implementing Requirements and Design Feedback Loops are as follows:
|
1. Determine how the SE process is tailored (see Chapter 8) for different levels of the project.
2. Audit the system requirements.
3. Conduct design reviews at the appropriate points in the development effort.
4. Iterate between systems (i.e., hardware and software), design, and manufacturing functions.
5. Audit the design and manufacturing process.
6. Iterate with other parts of the SE process.
7. Interface with specialty engineering groups and subcontractors to ensure common understanding across disciplines.
8. Update models as better data becomes available. |
|
The Implementation Process typically focuses on the following three forms of system elements:
|
• Hardware/Physical – Output is fabricated hardware or physical element
• Software – Output is software code and executable images
• Humans (Operators & Maintainers) – Output is procedures and training. |
|
Tasks associated with the System Build activity are as follows:
|
1. Obtain the system hierarchy
2. Determine the interfacing system elements.
3. Ascertain the functional and physical interfaces of the system and system elements.
4. Organize ICDs or drawing(s) to document the interfaces and to provide a basis for negotiating the interfaces between the parties to the interfaces.
5. Work with producibility/manufacturing groups to verify functional and physical internal interfaces.
6. Conduct internal IFWGs, as required.
7. Review test procedures and plans that verify the interfaces.
8. Audit design interfaces.
9. Ensure that interface changes are incorporated into specifications. |
|
The following tasks are conducted to integrate the system‐of‐interest with external systems:
|
1. Obtain the system hierarchy, the systems and CI design specifications, functional block diagrams, N2 charts, and any other data that define the system structure and its interfaces.
2. Determine the interfacing systems by reviewing the items in step 1 above.
3. Obtain interfacing programs’ ICDs, SEPs, and other relevant interface documents.
4. Ascertain the functional and physical interfaces of the external systems with the subject system.
5. Organize an ICD to document the interfaces and to provide a basis for negotiating the interfaces between the parties to the interfaces.
6. Conduct IFWGs among the parties to the interfaces.
7. Review test procedures and plans that verify the interfaces.
8. Audit design interfaces.
9. Incorporate interface changes into specifications. |
|
Basic verification activities are as follows:
|
• Inspection
• Analysis
• Demonstration
• Test
• Certification |
|
There are four basic test categories. They are:
|
• Development Test
• Qualification Test
• Acceptance Test
• Operational Test |
|
Models can be used within most systems life‐cycle processes, for example:
|
• Stakeholder Requirements Definition (intended usage thus affects MOEs)
• Requirements Analysis
• Architectural Design
• Design & Development
• Verification
• Operations |
|
Systems Engineering models and simulations typically reflect four paradigms:
|
• Functional analysis with specialty, environmental, and interface engineering models attached (see Section 4.12.2)
• Modern Structured Analysis/Process for System Architecture and Requirements Engineering
• Systems Modeling Language (SysML™; see Section 4.12.3)
• Context‐sensitive systems, also called complex, adaptive systems, which modify their internal gradients, architecture, and content depending on interactions with their environment. |
|
Models may be made up of one or several of the following types:
|
• Physical (e.g., Wind Tunnel model, Mockups, Acoustic model, structural test model, engineering model, prototypes)
• Graphical (e.g., N2 charts, Behavior diagrams, Program Evaluation Review Technique [PERT] charts, Logic Trees, blueprints)
• Mathematical (deterministic; e.g., Eigenvalue calculations, Dynamic motion, Cost)
• Statistical (e.g., Monte Carlo, Process modeling, sequence estimation). |
|
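A minimal sketch of the statistical (Monte Carlo) model type above: estimating the spread of a total assembly mass from per-part tolerances. The part list and tolerance values are invented for illustration, and the uniform distribution is an assumption.

```python
# Toy Monte Carlo model: total mass of an assembly whose part masses
# vary uniformly within invented tolerances.
import random

random.seed(1)  # reproducible runs

def sample_total_mass(parts, n=10_000):
    """Each part: (nominal_kg, tolerance_kg); masses drawn uniformly."""
    totals = []
    for _ in range(n):
        totals.append(sum(random.uniform(nom - tol, nom + tol) for nom, tol in parts))
    totals.sort()
    mean = sum(totals) / n
    p95 = totals[int(0.95 * n)]  # 95th-percentile total mass
    return mean, p95

parts = [(12.0, 0.5), (3.4, 0.2), (7.8, 0.3)]
mean, p95 = sample_total_mass(parts)
print(f"mean={mean:.2f} kg, 95th percentile={p95:.2f} kg")
```

The same pattern scales to schedule, cost, or performance parameters; only the sampled quantities change.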
The general steps in the application of modeling and simulation are as follows:
|
1. Select the appropriate type(s) of model (or simulation)
2. Design the model (or simulation).
3. Validate the model (or simulation).
4. Document the model (or simulation).
5. Obtain needed input data and operate the model (or simulation).
6. Evaluate the data to create a recommendation for the decision in question.
7. Review the entire process.
8. Evolve the model (or simulation), as necessary. |
|
Functions‐Based Systems Engineering Method: Functional analysis/allocation should be conducted iteratively:
|
• To define successively lower‐level functions required to satisfy higher‐level functional requirements and to define alternative sets of functional requirements.
• With requirements analysis, to define mission‐ and environment‐driven performance and to determine that higher‐level requirements are satisfied.
• To flow down performance requirements and design constraints.
• With design synthesis, to refine the definition of product and process solutions. |
|
Representative inputs to Functional Analysis/Allocation are as follows:
|
• Functional requirements
• Performance requirements
• Program decision requirements (such as objectives to reuse certain hardware and software or use COTS items)
• Specifications and Standards requirements
• Architectural concepts
• ConOps
• Constraints |
|
The products of the Functional Analysis/Allocation process include:
|
• Behavior Diagrams
• Context Diagrams
• Control Flow Diagrams
• Data Flow Diagrams
• Data Dictionaries
• Entity Relationship Diagrams
• Functional Flow Block Diagrams (FFBD)
• Models
• Simulation Results
• Integrated Definition for Functional Modeling (IDEF) Diagrams |
|
Tools that can be used to perform Functional Analysis/Allocation include:
|
• Analysis tools
• Modeling and Simulation tools
• Prototyping tools
• Requirements traceability tools. |
|
A description of each function in the hierarchy should be developed to include the following:
|
1. Its place in a network (e.g., FFBD or IDEF0/1 diagrams) characterizing its interrelationship with the other functions at its level
2. The set of functional requirements that have been allocated to it and define what it does
3. Its inputs and outputs, both internal and external. |
|
The following tasks constitute the bulk of the performance and other limiting requirements allocation activity:
|
1. Identify from the SOW all design constraints placed on the program. This particularly includes those from compliance documents.
2. Identify the groups defining constraints and incorporate them into the SE effort.
3. Analyze the appropriate standards and lessons learned to derive requirements to be placed on the hardware and software CI design.
4. Tailor the compliance documents to fit overall program needs.
5. Identify the cost goals allocated to the design.
6. Define system interfaces and identify or resolve any constraints that they impose.
7. Identify any COTS or NDI CIs that must be used and the constraints that they may impose.
8. Document all derived requirements in specifications and ensure that they are flowed down to the lowest CI level.
9. Ensure that all related documents (i.e., operating procedures, etc.) observe the appropriate constraints.
10. Review the design as it evolves to ensure compliance with documented constraints. |
|
The OOSEM objectives are as follows:
|
• Capture and analyze requirements and design information to specify complex systems
• Integrate model‐based systems engineering (MBSE) methods with object‐oriented software, hardware, and other engineering methods
• Support system‐level reuse and design evolution. |
|
OOSEM includes the following development activities:
|
• Analyze Needs
• Define System Requirements
• Define Logical Architecture
• Synthesize Allocated Architectures
• Optimize and Evaluate Alternatives
• Validate and Verify System. |
|
Behavior diagrams include the following:
|
• Use‐case diagrams
• Activity diagrams
• Sequence diagrams
• State machine diagrams |
|
Strategy Documents – Include:
|
– Organization Strategic Plan
– Implementation Strategy
– Integration Strategy
– Verification Strategy
– Transition Strategy
– Validation Strategy
– Operation Strategy
– Maintenance Strategy
– Disposal Strategy
– Decision Management Strategy
– Risk Strategy
– Configuration Management Strategy
– Information Management Strategy
– Measurement Strategy
– Acquisition Strategy
– Supply Strategy
– Project Tailoring Strategy |
|
SE and Specialty Engineering areas should also be documented (1-15):
|
1. Organization of the project and how SE interfaces with the other parts of the organization. How are communications at these interfaces handled? How are questions and problems elevated up the organization and resolved?
2. Responsibilities and authority of the key positions
3. Clear system boundaries and scope of the project
4. Project assumptions and constraints
5. Key technical objectives
6. Risk and opportunity plan, assessment, and methodology
7. Validation planning (not just verification planning)
8. Configuration Management planning
9. QA planning
10. Infrastructure support and resource management (i.e., facilities, tools, IT, personnel, etc.)
11. Reliability, availability, maintainability, supportability, and Integrated Logistics Support (ILS)
12. Survivability, including nuclear, biological, and chemical
13. EMC, radio frequency management, and electrostatic discharge
14. Human Engineering and HSI
15. Safety, health hazards, and environmental impact |
|
SE and Specialty Engineering areas should also be documented (16-22):
|
16. System security
17. Producibility
18. Test and evaluation
19. Testability and integrated diagnostics
20. Computer resources
21. Transportability
22. Other engineering specialties bearing on the determination of performance and functional requirements. |
|
The system being proposed may be complex enough that the customer will require training to use it. A plan for this training is required in the SEP and should include the following:
|
1. Analysis of performance
2. Behavior deficiencies or shortfalls
3. Required training to remedy deficiencies or shortfalls
4. Schedules to achieve required proficiencies. |
|
In the early 1990s, companies began to discover that they really could be more productive and reduce the risks inherent in concurrent product development if they moved away from the traditional hierarchical management structure and organized into Integrated Product Teams (IPTs). Some of the greatest productivity gains came in three areas:
|
• Unleashing the team’s ingenuity through decentralized processes
• Avoidance of previous problems through new, creative approaches
• Better integration between engineering and manufacturing. |
|
Integrated Product & Process Development, or IPPD. The following definitions apply to this concept:
|
• Integrated Product Development Team (IPDT)
• Integrated Product & Process Development (IPPD)
• Concurrent Engineering |
|
The objectives of using IPPD are as follows:
|
• Reduce time to market
• Improve product quality
• Reduce waste
• Save costs through the complete integration of SE life‐cycle processes. |
|
As noted above, industry has learned that IPDTs, using best practices and continuous improvement, achieve significant process improvements resulting in:
|
• Seamless interfaces within the teams
• Reduced engineering design time
• Fewer problems in transition from engineering to manufacturing
• Reduced development time and cost. |
|
The general approach is to form cross‐functional IPDTs for all products and services. There are typically three types of IPDTs:
|
1. Systems Engineering and Integration Team (SEIT)
2. Product Integration Team (PIT)
3. Product Development Team (PDT). |
|
The basic steps necessary to organize and run an IPDT on a project are as follows. Each step is discussed in turn, with a summary of the key activities that should take place during the step.
|
1. Define the IPDT teams for the project
2. Delegate responsibility and authority to IPDT leaders
3. Staff the IPDT
4. Understand the team’s operating environment
5. Plan and conduct the “Kick‐Off Meeting”
6. Train the team
7. Define the team vision and objectives
8. Have each team expand the definition of its job
9. Establish an expectation of routine Process Assessment and Continuous Improvement
10. Monitor team progress via measures and reports
11. Sustain and evolve the team throughout the project
12. Document team products
13. Close the project and conduct follow‐up activities |
|
Skinner lists ten principles of good decision making:
|
1. Use a value creation lens for developing and evaluating opportunities
2. Clearly establish objectives and trade‐offs
3. Discover and frame the real problem
4. Understand the business situation
5. Develop creative and unique alternatives
6. Identify experts and gather meaningful and reliable information
7. Embrace uncertainty as the catalyst of future performance
8. Avoid “analysis paralysis” situations
9. Use systemic thinking to connect current to future situations
10. Use dialog to foster learning and clarity of action. |
|
Additional decision analysis techniques include:
|
1. Sensitivity Analysis — looks at the relationships between the outcomes and their probabilities to find how “sensitive” a decision point is to the relative numerical values.
2. Value of Information Methods — whereby expending some effort on data analysis and modeling can improve the optimum expected value.
3. Multi‐attribute Utility Analysis — develops equivalencies between dissimilar units of measure. |
|
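Multi-attribute utility analysis (technique 3 above) is often reduced in practice to a weighted additive score. The sketch below is a hedged illustration: the criteria, weights, and option scores are all invented, and the additive form is one common simplification rather than the only MAUT formulation.

```python
# Illustrative weighted-utility scoring; criteria names, weights, and
# normalized 0-1 scores are made up for this example.

def weighted_utility(scores, weights):
    """scores/weights keyed by criterion; weights assumed to sum to 1."""
    return sum(weights[c] * scores[c] for c in weights)

weights = {"cost": 0.4, "performance": 0.35, "risk": 0.25}
options = {
    "A": {"cost": 0.9, "performance": 0.6, "risk": 0.7},
    "B": {"cost": 0.5, "performance": 0.9, "risk": 0.8},
}
# Rank options from highest to lowest total utility.
ranked = sorted(options, key=lambda o: weighted_utility(options[o], weights), reverse=True)
print(ranked)
```

Varying the weights and re-ranking is exactly the sensitivity analysis mentioned in technique 1: if small weight changes flip the ranking, the decision is weight-sensitive.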
It is useful to consider trade studies in three levels of formality:
|
• Formal
• Informal
• Mental |
|
A recent study reported that the following activities can be found in most trade study processes:
|
1. Frame the decision context, scope, constraints
2. Establish communications with stakeholders
3. Define evaluation criteria (i.e., musts and wants) and weights, where appropriate
4. Define alternatives and select candidates for study
5. Define measures of merit and evaluate selected candidates
6. Analyze the results, including sensitivity analyses, and select best alternative
7. Investigate the consequences of implementation
8. Review results with stakeholders and re‐evaluate, if required
9. Use scenario planning to verify assumptions about the future. |
|
The measurement of risk has two components:
|
• The likelihood that an event will occur
• The undesirable consequence of the event if it does occur. |
|
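The two components above are commonly combined into a single exposure value for ranking risks. The sketch below assumes a made-up 1-5 likelihood/consequence scoring scale and invented risk items; it illustrates the likelihood-times-consequence idea, not any specific risk-management standard.

```python
# Hypothetical risk-exposure sketch: exposure = likelihood x consequence,
# both scored on an invented 1-5 scale.

def risk_exposure(likelihood, consequence):
    """Both scored 1 (low) to 5 (high); the product drives ranking."""
    assert 1 <= likelihood <= 5 and 1 <= consequence <= 5
    return likelihood * consequence

risks = {
    "supplier slip": (4, 3),
    "thermal margin": (2, 5),
    "software defect": (3, 2),
}
# Sort risks so the highest-exposure item comes first.
ranked = sorted(risks.items(), key=lambda kv: risk_exposure(*kv[1]), reverse=True)
print(ranked[0][0])
```

A ranking like this is typically the input to the treat/avoid/accept/transfer decision on the following card.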
There are four basic approaches to treat risk:
|
• Avoid the risk through change of requirements or redesign
• Accept the risk and do no more
• Control the risk by expending budget and other resources to reduce likelihood and/or consequence
• Transfer the risk by agreement with another party that it is in their scope to mitigate. |
|
The following steps can be taken to avoid or control unnecessary risks:
|
• Requirements scrubbing
• Selection of most promising options
• Staffing and team building |
|
For high‐risk technical tasks, risk avoidance is insufficient and can be supplemented by the following approaches:
|
• Early procurement
• Initiation of parallel developments
• Implementation of extensive analysis and testing
• Contingency planning. |
|
The most desirable outcomes of an ECP cycle are:
|
1. System functionality is altered to meet a changing requirement
2. New technology or a new product extends the capabilities of the system beyond those initially required in ways that the customer desires
3. The costs of development, or of utilization, or of support are reduced
4. The reliability and availability of the system are improved. |
|
Configuration Management must, therefore, apply technical and administrative direction, surveillance, and services to do the following:
|
• Identify and document the functional and physical characteristics of individual CIs such that they are unique and accessible in some form
• Assign a unique identifier to each version of each CI
• Establish controls to allow changes in those characteristics
• Concur in product release and ensure consistent products via the creation of baseline products
• Record, track, and report change processing and implementation status and collect measures pertaining to change requests or problems with the product baseline
• Maintain comprehensive traceability of all transactions. |
|
The types of impacts the review board should assess typically include the following:
|
• All parts, materials, and processes are specifically approved for use on the project
• The design depicted can be fabricated using the methods indicated
• Project quality and reliability assurance requirements are met
• The design is consistent with interfacing designs. |
|
The following forms provide an organized approach to changing hardware, software, or documentation:
|
• Problem/Change Reports
• Specification Change Notice (SCN)
• Engineering Change Proposals (ECP)
• Engineering Change Requests (ECR)
• Request for Deviation/Waiver |
|
Suggested measures for consideration include the following:
|
• Number of changes processed, adopted, rejected, and open
• Status of open change requests
• Classification of change requests summary
• Number of deviations or waivers by CI
• Number of problem reports open, closed, and in‐process
• Complexity of problem reports and root cause
• Labor associated with problem resolution and verification stage when problem was identified
• Processing times and effort for deviations, waivers, ECPs, SCNs, ECRs, and Problem Reports
• Activities causing a significant number of Change Requests
• Rate of baseline changes. |
|
The following are important terms in Information Management:
|
• Information is what an organization has compiled or its employees know.
• Information assets are intangible information and any tangible form of its representation, including drawings, memos, e‐mail, computer files, and databases.
• Information security generally refers to the confidentiality, integrity, and availability of the information assets.
• Information security management includes the controls used to achieve information security.
• Information Security Management System is the life‐cycle approach to implementing, maintaining, and improving the interrelated set of policies, controls, and procedures. |
|
Examples of leading indicator measures include the following:
|
• Requirements Trends
• Interface Trends
• Requirements Validation Trends |
|
A critical element to each party is the definition of acceptance criteria, such as:
|
1. Percent completion of the SRD
2. Requirements stability and growth measures, such as the number of requirements added, modified, or deleted during the preceding time interval (e.g., month, quarter, etc.)
3. Percent completion of each contract requirements document: SOW, RFP, Contract Data/Document Requirements List (CDRL), etc. |
|
The value propositions to be achieved by instituting organization‐wide processes for use by projects are as follows:
|
1. Provide repeatable/predictable performance across the projects in the organization (this helps the organization in planning and estimating future projects and in demonstrating reliability to customers)
2. Leverage practices that have been proven successful by certain projects and instill those in other projects across the organization (where applicable)
3. Enable process improvement across the organization
4. Improve ability to efficiently transfer staff across projects as roles are defined and performed consistently
5. Improve start‐up of new projects (less re‐inventing the wheel). |
|
The basic requirements for standard and project‐tailored SE process control, based on CMMI®, are as follows:
|
1. SE processes shall be identified for use on projects.
2. Implementation and maintenance of SE processes shall be documented.
3. Inputs and outputs shall be defined for SE subprocesses.
4. Entrance and exit criteria shall be defined for SE process major activities.
5. Projects shall use a defined set of standard methods or techniques in the SE process.
6. Tailoring guidelines shall be used to permit the standard process to meet project‐specific needs.
7. Project management shall identify what parts of the standard SE process have been tailored to meet project‐specific needs.
8. Strengths and weaknesses in the SE process shall be assessed.
9. The SE process shall be periodically assessed.
10. The SE process shall be compared to benchmark processes used by other organizations. |
|
In addition, basic requirements specifically for SE process improvement control from these standards are as follows:
|
1. Organization best practices shall be identified and communicated to projects.
2. The standard SE process shall identify areas for future improvement.
3. SE process users shall be able to identify proposed improvements.
4. Compliance with improvement processes, plans, and practices shall be verified.
5. The project‐tailored SE improvement process shall include a means for evaluating its effectiveness.
6. The project‐tailored SE improvement process shall include a means for making needed improvements.
7. The standard SE process work products shall be reviewed and results used to improve the process.
8. The standard SE process compliance shall be reviewed and results used to improve the process. |
|
Process Compliance Reviews (PCR) – The PCR should cover at least the following:
|
• Identify strengths and weaknesses in the SE process and its improvement process.
• Identify key process elements that need to be followed in large and/or small projects
• Identify areas for future improvement
• Address the effectiveness of the tailored improvement process
• Address the conduct of, defects in, and improvements to the SE improvement process
• Review SE work products to identify potential trends indicating possible systemic issues
• Review the results of PCRs to identify potential trends indicating possible systemic issues
• Review a sampling of in‐process reviews to identify potential trends indicating possible systemic issues
• Review the definition and use of SE process measures. |
|
Feedback, minutes, and reports from project assessments, audits, formal reviews, in‐process reviews, and PCRs should also be sampled and analyzed, as should the results of training evaluations, action item compliance reports, lessons learned reports, and best practices. Analyses should address at least the following issues:
|
1. Is the SE process effective and useful (e.g., are we getting what we need from it)?
2. Can the SE process be improved (e.g., are there process elements that were a “waste of time”, or that should have been or could have been done better)?
3. What can we change in the SE process to make it better (e.g., what could we do to eliminate the recorded action items or defects)?
4. What is the productivity of the standard major SE process elements?
5. Are the SE support tools and facilities effective and useful?
6. Is information being collected on the effectiveness and usefulness of the SE process?
7. Is information being used to improve the effectiveness and usefulness of the SE process? |
|
The Project Portfolio Management Process also performs ongoing evaluation of the projects in its portfolio. Based on periodic assessments, projects are determined to justify continued investment if they have the following characteristics:
|
• Progress toward achieving established goals
• Comply with project directives from the organization
• Are conducted according to an approved plan
• Provide a service or product that is still needed and providing acceptable investment returns. |
|
The following are examples of improper reasons for tailoring:
|
– Not doing things because you do not want to do them
– Not doing things because they are too hard
– Not doing things because your boss doesn’t like them. |
|
Influences on tailoring at the organizational level include:
|
• Organization issues
• Organizational learning
• Organizational maturity |
|
When contemplating if and how to incorporate a new or updated external standard into an organization, the following should be considered:
|
• Understand the Organization
• Understand the New Standard
• Adapt the Standard to the Organization (Not Vice Versa)
• Institutionalize Standards Compliance at the “Right” Level
• Allow for Tailoring. |
|
Factors that influence tailoring at the project level include:
|
• Stakeholders and customers (e.g., number of stakeholders, quality of working relationships, etc.)
• Project budget, schedule, and requirements
• Risk tolerance
• Complexity and precedence of the system. |
|
Common traps in the Tailoring Process include, but are not limited to, the following:
|
1. Reuse of a tailored baseline from another system without repeating the Tailoring Process
2. Using all processes and activities “just to be safe”
3. Using a pre‐established tailored baseline
4. Failure to include relevant stakeholders |
|
There are typically three categories of availability that are often expressed as requirements:
|
• Operational availability
• Inherent availability
• Measured availability |
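As a hedged illustration of the first two categories, the sketch below uses the standard reliability-engineering definitions (inherent availability from MTBF/MTTR, operational availability from observed uptime/downtime); all figures are invented:

```python
# Illustrative sketch of two common availability formulas (standard
# reliability-engineering definitions; the numbers below are invented).

def inherent_availability(mtbf_hours, mttr_hours):
    """Ai = MTBF / (MTBF + MTTR): design-level availability,
    counting only corrective-maintenance downtime."""
    return mtbf_hours / (mtbf_hours + mttr_hours)

def operational_availability(uptime_hours, downtime_hours):
    """Ao = uptime / (uptime + downtime): availability as actually
    observed in service, including logistics and admin delays."""
    return uptime_hours / (uptime_hours + downtime_hours)

ai = inherent_availability(mtbf_hours=500.0, mttr_hours=5.0)
ao = operational_availability(uptime_hours=8000.0, downtime_hours=760.0)
print(f"Ai = {ai:.4f}, Ao = {ao:.4f}")
```

Ao is normally lower than Ai for the same system, since it adds supply and administrative delay time to the repair time; measured availability is obtained the same way from test data.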
|
Another important factor to consider during the design of a system is PHS&T, which includes all special provisions, materials and containers and how the system or the parts thereof shall be handled, distributed, and stored. In addition to the system itself, PHS&T also covers spares and consumables.
|
• Packaging
• Handling
• Storage
• Transportation |
|
Failure Modes Effects and Criticality Analysis (FMECA) is a means of recording and determining the following:
|
• What functions the equipment is required to perform
• How these functions could fail
• Possible causes of the failures
• Effects the failures would have on the equipment or system
• The criticality of the failures. |
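A minimal FMECA-style worksheet can be sketched as below. The risk priority number (RPN = severity × occurrence × detection) is a common FMEA criticality measure rather than something mandated by the list above, and the rating scales and entries are invented:

```python
# Minimal sketch of an FMECA-style worksheet: each failure mode records
# function, mode, cause, and effect, and is ranked by a risk priority
# number (RPN = severity * occurrence * detection). Scales and example
# entries are illustrative only.

from dataclasses import dataclass

@dataclass
class FailureMode:
    function: str
    mode: str
    cause: str
    effect: str
    severity: int    # 1 (negligible) .. 10 (catastrophic)
    occurrence: int  # 1 (rare) .. 10 (frequent)
    detection: int   # 1 (always detected) .. 10 (undetectable)

    @property
    def rpn(self):
        return self.severity * self.occurrence * self.detection

modes = [
    FailureMode("Provide coolant flow", "Pump seizure", "Bearing wear",
                "Loss of cooling; system shutdown", 9, 3, 2),
    FailureMode("Regulate pressure", "Valve stuck open", "Contamination",
                "Overpressure relief activates", 6, 4, 5),
]

# Rank failure modes by criticality, most critical first.
for fm in sorted(modes, key=lambda m: m.rpn, reverse=True):
    print(f"RPN {fm.rpn:4d}  {fm.mode}: {fm.effect}")
```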
|
The following activities are recommended for performing LCC analyses:
|
1. Obtain a complete definition of the system, elements, and their subsystems.
2. Determine the total number of units of each element.
3. Obtain the life‐cycle program schedule.
4. Obtain manpower estimates for each stage of the entire program.
5. Obtain approximate/actual overhead, general and administrative (G&A) burden rates, and fees that should be applied to hardware and manpower estimates.
6. Develop cost estimates for each subsystem of each system element for each stage of the program.
7. Document the results of the LCC analyses. |
|
Common methods/techniques for conducting LCC analyses are as follows:
|
a. Expert Judgment
b. Analogy
c. Parkinson Technique
d. Price‐To‐Win
e. Top‐Down
f. Bottom‐Up
g. Algorithmic (parametric)
h. Design‐to‐Cost or Cost‐As‐An‐Independent‐Variable
i. Wide‐band Delphi techniques
j. Taxonomy method |
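As a sketch of the algorithmic (parametric) method from the list above: cost is estimated from a measurable driver through a cost-estimating relationship (CER) calibrated on historical data. The power-law CER and its coefficients below are invented for illustration, not taken from any real cost model:

```python
# Toy parametric cost estimate: cost = a * size**b, a power-law
# cost-estimating relationship (CER). Coefficients are illustrative;
# a real CER would be calibrated against historical program data.

def parametric_cost(size, a=2.5, b=1.1):
    """Estimate cost ($K) from a size driver using a power-law CER."""
    return a * size ** b

for size in (10, 50, 100):
    print(f"size={size:4d}  estimated cost = {parametric_cost(size):8.1f} $K")
```

With b > 1 the CER models diseconomies of scale: doubling the size driver more than doubles the estimated cost.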
|
Accuracy in the estimates will improve as the system evolves and the data used in the calculation is less uncertain.
|
1. R&D costs
2. Investment costs
3. Utilization and Support costs
4. Disposal costs |
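The four categories above can be rolled up into a total LCC; the toy sketch below does this with invented figures (a real LCC model would break each category into many cost elements per life-cycle stage):

```python
# Toy life-cycle cost roll-up across the four cost categories.
# All figures are invented for illustration.

costs = {
    "R&D": 12.0,                       # $M: research and development
    "Investment": 48.0,                # production, facilities, initial spares
    "Utilization and Support": 130.0,  # operations, maintenance, personnel
    "Disposal": 5.0,                   # retirement and clean-up
}

total_lcc = sum(costs.values())
for category, cost in costs.items():
    print(f"{category:25s} ${cost:6.1f}M  ({cost / total_lcc:5.1%})")
print(f"{'Total LCC':25s} ${total_lcc:6.1f}M")
```

Even in this toy breakdown, the downstream Utilization and Support share dominates, which is why LCC analysis stresses costs locked in by early design decisions.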
|
LCC analysis has three important benefits:
|
• All costs associated with a system become visible: upstream, locked‐in costs such as R&D, and downstream costs such as customer service.
• Supports an analysis of organizational interrelationships. Reinforces the importance of locked‐in costs such as R&D; low R&D expenditures may lead to high customer service costs in the future.
• Project managers can develop accurate revenue predictions. |
|
The “human” in HSI includes all personnel who interact with the system in any capacity:
|
• System owners
• Users/customers
• Operators
• Decision‐makers
• Maintainers
• Support personnel
• Trainers
• etc. |
|
This broad and disciplined approach focuses on important customer/user issues:
|
• Usability
• Usefulness
• Suitability
• Effectiveness
• Safety and health
• Resilience
• Understanding of the technological elements
• Reliability
• Availability
• Maintainability
• Supportability
• Trainability
• Cost of ownership. |
|
Systems development organizations routinely focus on short‐term acquisition cost and schedule, while not paying sufficient attention to the more expensive total ownership costs, for example:
|
• Personnel costs
• Repair and sustainment costs
• Cost of redesign and retrofit to correct deficiencies
• Training costs
• Cost of mishaps
• Handling hazardous materials
• Disability compensation and liability claims
• Environmental clean‐up and disposal costs. |
|
The following human‐centered domains with recognized application to HSI serve as a good foundation of human considerations that need to be addressed in system design and development, but are clearly not all‐inclusive:
|
• Manpower
• Personnel
• Training
• Human Factors Engineering (HFE)
• Environment
• Safety
• Occupational Health
• Habitability
• Survivability |
|
The Training community develops and delivers individual and collective qualification training programs, placing emphasis on options that:
|
– Enhance user capabilities (to include operator, maintainer, and support personnel)
– Maintain skill proficiencies through continuation training and retraining
– Expedite skill and knowledge attainment
– Optimize the use of training resources.

This “optimal performance” is the achievement of the following:

– Conducting task analyses and design trade‐off studies to optimize human activities, creating work flow
– Making the system intuitive to humans who will use, operate, maintain, and support it
– Providing deliberately designed primary, secondary, backup, and emergency tasks and functions
– Meeting or exceeding performance goals and objectives established for the system
– Conducting analyses to eliminate/minimize the performance and safety risks leading to task errors and system mishaps across all expected operational, support, and maintenance environments. |
|
Prevalent safety issues include the following:
|
– Factors that threaten the safety of personnel and their operation of the system
– Walking/working surfaces, emergency egress pathways, and personnel protection devices
– Pressure and temperature extremes
– Prevention/control of hazardous energy releases (e.g., mechanical, electrical, fluids under pressure, ionizing or nonionizing radiation, fire, and explosions). |
|
Prevalent occupational health issues include the following:
|
– Noise and hearing protection
– Chemical exposures and skin protection
– Atmospheric hazards (e.g., confined space entry and oxygen deficiency)
– Vibration, shock, acceleration, and motion protection
– Ionizing/non‐ionizing radiation and personnel protection
– Human factors considerations that can result in chronic disease or discomfort (e.g., repetitive motion injuries or other ergonomic‐related problems). |
|
Habitability – Involves characteristics of system living and working conditions, such as the following:
|
– Lighting
– Ventilation
– Adequate space
– Vibration, noise, and temperature control
– Availability of medical care, food, and/or drink services
– Suitable sleeping quarters, sanitation and personal hygiene facilities, and fitness/recreation facilities. |
|
Human Systems Integration programs have distilled the following HSI activities and associated key actionable tenets:
|
1. Initiate HSI Early and Effectively
2. Identify Issues and Plan Analysis
3. Document HSI Requirements
4. Make HSI a Factor in Source Selection for Contracted Development Efforts
5. Execute Integrated Technical Processes
6. Conduct Proactive Tradeoffs
7. Conduct HSI Assessments |
|
The objective of performing Value Engineering is to improve the economic value of a project, product, or process by reviewing its elements to accomplish the following:
|
• Achieve the essential functions and requirements
• Lower total LCC (resources)
• Attain the required performance, safety, reliability, quality, etc.
• Meet schedule objectives. |
|
Some of the uses and benefits of VE include:
|
• Clarify objectives
• Solve problems (obtain “buy‐in” to solution)
• Improve quality and performance
• Reduce costs and schedule
• Assure compliance
• Identify and evaluate additional concepts and options
• Streamline and validate processes and activities
• Strengthen teamwork
• Reduce risks
• Understand customer requirements. |
|
A typical six‐step VE Job Plan is as follows:
|
• Phase 0: Preparation/Planning
• Phase 1: Information
• Phase 2: Function Analysis
• Phase 3: Creativity
• Phase 4: Evaluation
• Phase 5: Development
• Phase 6: Presentation/Implementation |
|
There is no “correct” FAST model; team discussion and consensus on the final diagram is the goal. Using a team to develop the FAST diagram is beneficial for several reasons:
|
• Applies intuitive logic to verify functions
• Displays functions in a diagram or model
• Identifies dependence between functions
• Creates a common language for the team
• Tests validity of functions. |
|
Two types of models are particularly important to SE:
|
descriptive and prescriptive
|
|
Two types of prototyping are commonly used:
|
rapid and traditional
|
|
SysML Diagram Types
|
Structure Diagrams:
– Block Definition Diagram
– Package Diagram
– Internal Block Diagram
– Parametric Diagram
Requirement Diagram
Behavior Diagrams:
– Activity Diagram
– Sequence Diagram
– State Machine Diagram
– Use Case Diagram |
|
SE Role in the Configuration Management Process
|
1. Make sure the change is necessary
2. Make sure the most cost-effective solution has been proposed. |
|
Three categories of availability
|
• Operational availability
• Inherent availability
• Measured availability |
|
Procedures generated and used in the life cycle processes include:
|
Integration Procedure
Verification Procedure
Transition Procedure
Validation Procedure
Maintenance Procedure
Disposal Procedure |
|
Reports generated from the life cycle processes include:
|
Integration Report
Verification Report
Transition Report
Validation Report
Operation Report
Maintenance Report
Disposal Report
Decision Report
Risk Report
Configuration Management Report
Information Management Report
Measurement Report
Acquisition Report
Supply Report
Infrastructure Management Report |
|
Specialty Engineering Activities
|
Design for Acquisition Logistics – Integrated Logistics Support
Cost‐Effectiveness Analysis
Electromagnetic Compatibility Analysis
Environmental Impact Analysis
Interoperability Analysis
Life‐Cycle Cost Analysis
Manufacturing and Producibility Analysis
Mass Properties Engineering Analysis
Safety & Health Hazard Analysis
Sustainment Engineering Analysis
Training Needs Analysis
Usability Analysis/Human Systems Integration
Value Engineering |
|
“‐ilities” Analysis Methods
|
Failure Modes Effects and Criticality Analysis
Level of Repair Analysis
Logistic Support Analysis/Supportability Analysis
Reliability Centered Maintenance Analysis
Survivability Analysis
System Security Analysis |