Abstract—Artificial intelligence is transforming software testing by scaling up test data generation and analysis, creating new possibilities but also new challenges. A common problem with large-scale test data is the lack of traceability between test scenarios and system requirements. This paper addresses that challenge by proposing a data-driven traceability solution tailored to an industrial setting. Building on an existing model-based testing framework, the design extends its annotation capabilities with a multilayer taxonomy. The proposed architecture leverages AI techniques for bidirectional mapping: linking requirements to test scripts for coverage analysis, and tracing test scripts back to requirements to identify the functionality under test.
Index Terms—test coverage, test report, natural language processing, requirements traceability
© 2025 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.