DevOps Aviator is the generative AI layer that rides on top of the OpenText DevOps catalog. Merito delivers the rollout that turns Aviator suggestions into evidence teams actually trust.
Merito sells OpenText DevOps Aviator and delivers AI rollout across the DevOps catalog: test generation, defect clustering, risk scoring, and review-ready release evidence that release managers sign.
What it is
OpenText DevOps Aviator is a generative AI surface shared across the DevOps catalog. It reads from Software Delivery Management, Application Quality Management, Functional Testing, and Performance Engineering products and produces outputs that a release manager can act on: test cases generated from requirements, defects clustered by root cause, risk scores on release candidates, and narrative evidence suitable for governance meetings.
Aviator is not a standalone product: it does not run tests or manage releases on its own. It runs inside the catalog. The license is priced and sold alongside the DevOps product it augments. Customers who own Software Delivery Management add Aviator to get AI-generated release narratives; customers who own Application Quality Management add Aviator to get test-case generation grounded in their own requirements.
Merito treats Aviator as a rollout problem, not a toggle. Getting useful AI output requires clean requirements, clean defect history, and an honest baseline of manual effort so improvements are measurable. Merito's engagement starts with the data layer: what history you are feeding Aviator, what outputs you want from it, and which release ceremonies will adopt those outputs first.
Ideal use cases
What it is best at
A single Aviator license plugs into Software Delivery Management, Functional Testing, Application Quality Management, and Performance Engineering. Customers do not buy one AI for each product.
Aviator reads from the customer's own requirements, test history, defect backlog, and release outcomes. The suggestions are specific to the program's history, not generic LLM output.
The release-narrative and risk-score outputs are designed to land in governance ceremonies. Release managers can accept, edit, or reject them inline; AI does not publish evidence on its own.
Core capabilities
AI-drafted test cases grounded in the customer's own requirements, stories, and existing test assets.
Requirement-to-test-case drafting
Aviator reads requirements in Application Quality Management or Software Delivery Management and drafts test cases ready for human review.
Story-to-acceptance drafting
Aviator takes agile stories and generates candidate acceptance tests aligned with the team's past patterns.
Gap analysis against existing tests
Aviator compares requirement scope against existing test coverage and proposes the missing tests.
Cluster, prioritize, and explain defects across release cycles using the program's own history.
Defect clustering
Aviator groups defects by likely root cause so triage teams see five clusters instead of five hundred tickets.
Defect prioritization
Severity and customer-impact scoring informed by the program's historical closure patterns.
Narrative summaries
Natural-language descriptions of defect clusters written for engineering leads and release managers.
AI-generated release readiness evidence that governance bodies can review and sign.
Release risk scoring
A score for each release candidate informed by change scope, test coverage, defect trend, and past-release outcomes.
Release narrative generation
A written summary of what is in the release, what was tested, and what residual risk remains. Tuned for governance meetings.
Trend analysis
Cycle-over-cycle analysis of delivery health trends, surfaced as both dashboards and narrative copy.
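As an illustration only, a release risk score of the kind described above can be sketched as a weighted blend of normalized signals. Aviator's actual model is not public; every field name, weight, and function below is a hypothetical placeholder, not OpenText's API.

```python
# Hypothetical sketch of a release risk score blending the four signals named
# above (change scope, test coverage, defect trend, past-release outcomes).
# All names and weights are illustrative; they do not reflect Aviator's model.
from dataclasses import dataclass

@dataclass
class ReleaseSignals:
    change_scope: float       # 0..1, share of modules touched this release
    test_coverage: float      # 0..1, fraction of in-scope requirements with passing tests
    defect_trend: float       # 0..1, open-defect growth vs. prior cycle (higher = worse)
    past_failure_rate: float  # 0..1, fraction of similar past releases that slipped

def risk_score(s: ReleaseSignals) -> float:
    """Weighted blend, 0 (low risk) to 100 (high risk). Coverage lowers risk."""
    raw = (0.25 * s.change_scope
           + 0.30 * (1.0 - s.test_coverage)   # missing coverage adds risk
           + 0.25 * s.defect_trend
           + 0.20 * s.past_failure_rate)
    return round(100 * raw, 1)

print(risk_score(ReleaseSignals(0.4, 0.9, 0.2, 0.1)))  # prints 20.0
```

The point of the sketch is the shape, not the numbers: a score a release manager can challenge is one whose inputs are visible, which is why the rollout work described on this page starts with cleaning those inputs.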
Where it fits in the stack
Deployment and implementation
Licensing and packaging
Aviator for Software Delivery Management
Release narratives, risk scoring, trend analysis inside SDM.
Best for: Programs using SDM for delivery governance.
Aviator for Application Quality Management
Test generation, gap analysis, defect clustering inside AQM.
Best for: Regulated QA programs needing AI-assisted authoring.
Aviator for Functional Testing
Test step suggestions, defect categorization inside Functional Testing.
Best for: Automation teams consolidating on OpenText Functional Testing.
Aviator for Performance Engineering
Scenario drafting and result interpretation inside LoadRunner editions.
Best for: Performance CoEs wanting faster scenario authoring.
Merito services
Merito sells licenses and the delivery work around them. Pick the service that matches where you are in the lifecycle.
Data-layer preparation, first-ceremony adoption, and measurement of AI-assisted improvement.
Aviator output wired into release ceremonies and CI/CD governance meetings.
Readiness assessment for customers considering Aviator across the DevOps catalog.
Named engineer, priority SLAs, and release-time coverage for Aviator in production.
Long-term run support for Aviator in high-volume programs.
Author and reviewer training for teams consuming Aviator artifacts.
Merito-placed Aviator engineers embedded with your release and QA teams.
DevOps Aviator licensing
Merito sells OpenText DevOps Aviator and delivers the rollout that turns AI drafts into evidence your governance bodies actually sign.
Merito point of view
Merito has seen customers buy Aviator, generate thousands of AI artifacts in the first month, and have zero of them land in a real release ceremony. That is an adoption failure, not a product failure. Aviator generates fine; the question is whether a governance body will accept AI-drafted evidence, and that question is not answered by the license.
Merito recommends picking one ceremony (release readiness review, defect triage, or test authoring) and adopting Aviator there first. Measure the before and after. If the AI draft saves the team time and the quality holds, expand. If it does not, the tuning work is the bottleneck, not the spend.
Aviator is priced per underlying DevOps product. Merito's advice: start with Aviator for the product that has the cleanest data. Software Delivery Management often leads because release data is easier to clean than test data; Application Quality Management leads in regulated environments because requirement quality is already a habit.
Consultation request
Share which DevOps catalog product you own and which ceremony you want Aviator to land in first. A Merito Aviator specialist follows up within one business day.
One ceremony first
Merito picks one ceremony, baselines manual effort, and expands Aviator only when the numbers move.
Human-in-the-loop by design
No Aviator artifact ships without human review. Merito enforces that in every rollout.
Next step
A Merito Aviator engagement picks one ceremony (release readiness, defect triage, or test authoring), measures the baseline, and adopts Aviator there first.