Best Practices

This page provides general best practices for SysML v2 modeling and reliability analysis with Derisker. These practices apply across all workflows and standards.

Iterative Development

Work Incrementally

Start top-down:

  1. Begin with high-level system elements

  2. Add detail progressively as analysis deepens

  3. Avoid over-modeling early; focus on critical paths first
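The three steps above can be sketched in SysML v2 textual notation. This is only an illustration; the package and part names are hypothetical placeholders:

package VehicleModel {
    part def Vehicle {
        // Step 1: begin with high-level subsystems only
        part power : PowerSubsystem;
        part braking : BrakingSubsystem;
    }

    // Step 2: definitions start as empty placeholders; internal parts,
    // attributes, and failure data are added in later passes as the
    // analysis deepens. Step 3: detail only the critical paths first.
    part def PowerSubsystem;
    part def BrakingSubsystem;
}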

Parse frequently:

  • Parse after completing each logical section (e.g., one subsystem)

  • Catch errors early before they propagate

  • Verify computed values match expectations as you go

Use bidirectional sync:

  • Model structure and relationships in SysML

  • Refine details and documentation in Derisker table view

  • Changes sync back to SysML files; leverage both tools

Validate Continuously

  • Review parser output after each significant change

  • Check for warnings and validation messages

  • Fix issues immediately rather than accumulating technical debt

Documentation

Comment Effectively

Write for your future self:

  • Explain why decisions were made, not just what was modeled

  • Document assumptions and constraints

  • Note deviations from standards or typical patterns

Be specific:

// Good: Explains rationale with data
attribute :>> failureCauses_Comments =
    "Solder joint fatigue due to thermal cycling (50+ cycles per day),
     manufacturing defect in PCB via (0.1% defect rate per supplier data),
     ESD damage during handling (mitigation: wrist straps required per IPC-A-610)";

// Bad: Vague and unhelpful
attribute :>> failureCauses_Comments = "Various causes";

Include references:

attribute :>> documentation =
    "Reliability Prediction Report RPR-2025-03 Section 4.2;
     Thermal Analysis TAR-2025-01;
     Supplier Quality Data SQD-BAT-2024";

Maintain Traceability

  • Link failure modes to requirements

  • Reference design documents

  • Track which analyses informed ratings

  • Document review history and sign-offs
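As an illustration, a part under analysis can carry both a requirement link and references to the analyses that informed its ratings. The names below are hypothetical, and the exact satisfy syntax may vary between SysML v2 tools:

package Traceability {
    requirement def OperatingLifeReq;

    part battery {
        // Trace the analyzed part back to the requirement it satisfies
        satisfy requirement lifeReq : OperatingLifeReq;

        // Reference the source documents and review history
        attribute :>> documentation =
            "Reliability Prediction Report RPR-2025-03 Section 4.2;
             Safety review SR-2025-07 (severity ratings approved)";
    }
}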

Model Organization

Use Consistent Structure

Package organization:

  • Separate concerns: architecture, analysis, customizations

  • Use descriptive package names

  • Follow a consistent hierarchy across projects
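One possible layout that separates these concerns (package names are illustrative, not required by Derisker):

package DroneReliabilityProject {
    package Architecture {
        // System structure: parts, connections, interfaces
    }
    package FailureAnalysis {
        // FMEA elements, failure modes, causal links
    }
    package Customizations {
        // Project-specific attribute extensions and libraries
    }
}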

Naming conventions:

  • Use clear, descriptive names

  • Follow a project naming standard

  • Avoid abbreviations unless they’re universally understood

  • Be consistent with pluralization

Clear Element Identification

Parts and actions:

  • Name parts after their function or role, not implementation details

  • Use action names that describe what the action does

  • Avoid generic names like “component1”, “function2”
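For example (hypothetical names):

// Good: named after role and behavior
part batteryManagementSystem;
action monitorCellVoltage;

// Bad: generic and uninformative
part component1;
action function2;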

Qualified names:

  • Keep qualified name paths reasonable in length

  • Use packages to avoid deeply nested structures

  • Consider readability when referencing elements

Modular Design

  • Keep related elements together

  • Use imports strategically to manage dependencies

  • Avoid circular dependencies between packages

  • Design for reusability across analyses
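For instance, an analysis package can import just the subsystem it needs instead of copying its contents (names are hypothetical):

package PowerReliabilityAnalysis {
    // Depend on the architecture package without duplicating it;
    // keep imports one-directional to avoid circular dependencies
    import VehicleArchitecture::PowerSubsystem::*;
}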

Version Control

Use Source Control

Essential practices:

  • Keep all SysML files under version control (Git, SVN, etc.)

  • Commit frequently with meaningful messages

  • Tag releases and analysis milestones

  • Use branches for exploratory changes

Good commit messages:

# Good
"Add battery failure modes with thermal stress analysis data"
"Update severity ratings per safety review feedback"
"Fix causal link cycles in power subsystem"

# Bad
"updates"
"fix"
"changes"

Branch Strategy

  • Use main branch for approved analyses

  • Create feature branches for new subsystems or major changes

  • Use pull requests for team review before merging

  • Tag analysis versions that were delivered to stakeholders

Avoid Conflicts

  • Coordinate with the team on who’s working on which files

  • Keep working branches short-lived

  • Pull frequently to stay up to date

  • Resolve conflicts promptly

Team Collaboration

Establish Conventions

  • Define team-wide naming conventions

  • Agree on package structure patterns

  • Document your team’s workflow

  • Create templates for common analysis types

Code Reviews

  • Review significant model changes as a team

  • Check for consistency with established patterns

  • Verify traceability and documentation quality

  • Confirm ratings and calculations are justified

Knowledge Sharing

  • Document lessons learned from each analysis

  • Share reusable patterns and library extensions

  • Maintain a team knowledge base

  • Conduct regular design/analysis reviews

Model Quality Checklist

Before Finalizing Analysis

  • All required attributes filled in

  • No TBD values remaining (unless intentionally deferred)

  • All relationships (causal links, references) validated

  • No parser errors or warnings

  • No cycles in failure chains

  • Comments explain key decisions and assumptions

  • References to source documents included

  • Model builds and parses successfully

  • Results exported and reviewed

  • Peer review completed

  • Version tagged in source control

Before Delivery

  • Cover sheet metadata complete and accurate

  • Stakeholder review completed

  • Required approvals obtained

  • Export formats generated (CSV, reports, visualizations)

  • Traceability to requirements verified

  • Analysis assumptions documented

  • Known limitations or exclusions noted

  • Final version tagged and archived

Performance Tips

Keep Models Manageable

  • Break very large analyses into multiple files

  • Use imports rather than copying content

  • Consider separate analyses for distinct subsystems

  • Archive historical analyses rather than keeping all in one model

Optimize Parsing

  • Parse incrementally during development

  • Don’t wait until the end to validate

  • Use specific qualified names rather than parsing entire workspace

  • Clear parser cache if experiencing issues

Common Pitfalls

Avoid These Mistakes

Over-modeling:

  • Don’t model implementation details not relevant to analysis

  • Focus on failure modes that matter to reliability/safety

  • Avoid “analysis paralysis”; deliver useful results, then iterate

Under-documenting:

  • Every rating should have justification

  • Every assumption should be documented

  • Every deviation should be explained

Ignoring warnings:

  • Parser warnings usually indicate real issues

  • “It parses” doesn’t mean “it’s correct”

  • Investigate and resolve warnings promptly

Inconsistent updates:

  • Keep SysML model and exported results synchronized

  • Re-parse after making changes

  • Export fresh results before stakeholder reviews

Poor traceability:

  • Link everything to requirements

  • Reference source documents

  • Maintain clear derivation paths for computed values