This OpenText ALM release focuses on strengthening enterprise test management fundamentals rather than introducing surface-level features. It delivers meaningful improvements for regulated SDLC environments where auditability, traceability, execution control, and operational efficiency directly affect delivery risk.
For Quality Engineering, DevOps, and DevSecOps leaders, these enhancements reduce governance gaps, improve release confidence, and streamline day-to-day testing operations. Merito helps enterprises translate these platform capabilities into practical, governed workflows that scale across programs and regions.
AI Test Creation with Structured Design Steps in Aviator
OpenText ALM Aviator now generates AI-created tests as structured design steps rather than unstructured text, with each step saved as a discrete, editable entity in the test design.
This change makes AI-generated tests enterprise-ready. Structured steps improve traceability to requirements, enable step-level coverage analysis, and strengthen audit reviews. Compliance teams gain visibility into exactly what the AI generated and how it aligns with controls and acceptance criteria.
For testers and test designers, this significantly reduces cleanup time. Teams can edit individual steps, align them with enterprise naming standards, reuse them across test libraries, and quickly prepare regression coverage under release pressure.
Version Control for Requirements & Tests in OpenText ALM Web Client
Version control is now available for requirements and tests directly within the OpenText ALM Web Client. Teams can maintain multiple versions with full historical tracking.
This strengthens enterprise governance by eliminating silent drift between requirements and tests. Audit and risk teams can see which versions were used to validate specific releases, while change advisory boards gain clarity into how assets evolved over time.