From Sample-Based Assessments to Full Inspection of Systems Engineering Artifacts
Software development in regulated domains demands a rigorous systems engineering approach based on the creation of supporting engineering artifacts. The most prominent artifacts are requirements and test specifications, which must cover the entire product scope. The quality of these artifacts is a critical prerequisite for product quality. Current state-of-the-art quality assurance practices for these artifacts typically rely on assessing a very limited number of samples, with the results assumed to be representative of the whole. This contribution demonstrates how automated checks using large language models (LLMs) can enable full quality coverage for engineering artifacts. We outline the benefits of this approach and discuss the inherent limitations and challenges.
Value for the audience:
- The audience will learn how integrating LLMs into “docs-as-code” workflows delivers scalable, traceable, and cost-efficient quality control.
- Attendees will learn about the opportunities and challenges of AI-assisted quality assurance.
Problems addressed:
Human performance is inherently variable. Even if certain individuals within an organization consistently produce high-quality artifacts, this cannot be assumed for all contributors — or even for the same individuals over time.
Engineering artifacts must be reviewed before approval, but this process is time-consuming. Experience shows that review time is limited, and reviewers tend to focus on content rather than formal quality aspects.
Review evidence is typically lost. Even when reviews are performed thoroughly, little or no traceability remains. Documenting the review process in detail would significantly increase effort.
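The problems above can be illustrated with a minimal docs-as-code quality gate: every requirement is checked automatically (full coverage instead of sampling), and the findings are persisted as traceable review evidence. All names (`Requirement`, `check_requirement`, the example checks) are hypothetical illustrations, not part of the talk; a real pipeline would add LLM-backed checks alongside the rule-based ones sketched here.

```python
# Hypothetical sketch of an automated quality gate for requirements in a
# docs-as-code workflow. Illustrative only; names are not from the talk.
import json
from dataclasses import dataclass

@dataclass
class Requirement:
    req_id: str
    text: str

def check_requirement(req: Requirement) -> list[str]:
    """Run formal quality checks on one requirement.

    A production pipeline would additionally prompt an LLM with a quality
    checklist and parse its verdict; omitted here to stay self-contained.
    """
    findings = []
    if len(req.text.split()) < 5:
        findings.append("too short to be verifiable")
    if "shall" not in req.text.lower():
        findings.append("missing modal verb 'shall'")
    return findings

def run_quality_gate(reqs: list[Requirement]) -> list[dict]:
    """Check every requirement (full coverage) and keep the evidence."""
    return [{"req": r.req_id, "findings": check_requirement(r)} for r in reqs]

reqs = [
    Requirement("REQ-001", "The system shall log every failed login attempt."),
    Requirement("REQ-002", "Fast startup."),
]
print(json.dumps(run_quality_gate(reqs), indent=2))
```

Because the evidence is machine-generated and versioned with the artifacts, the review trail survives instead of being lost after approval.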
Talk language: English
Level: Advanced
Target group: Senior developers, software quality experts
Company:
Robert Bosch GmbH
Dr. Jochen Schmähling