Converge.AI
Case Studies

Outcomes the boardroom can underwrite.

Each case is anonymised, but the architectures and metrics are real. We do not publish projects we cannot defend in technical due diligence.

01 / 3
National Retailer

Unifying 14 years of merchandising knowledge into a single retrieval engine

The Context

A national retailer with 1,200+ stores, four ERPs from acquisitions, and a 14-year archive of merchandising and supplier policy documents fragmented across SharePoint, Confluence, and shared drives.

The Technical Debt

Category buyers spent 6–9 hours per week locating policy precedents, supplier terms, and historical merchandising decisions. Off-the-shelf "AI search" tools failed evaluation because they could not cite, hallucinated supplier pricing, and ignored regional variations.

The Architecture

  • Document-aware ingestion with table-preserving chunking across 1.2M documents
  • Hybrid retrieval: BM25 + dense embeddings + cross-encoder reranking, tuned per content class
  • Entitlement inheritance from existing Active Directory groups — buyers see only their categories
  • Citation-grounded answers with paragraph-level provenance and confidence scoring
  • Continuous evaluation harness with a 1,400-question golden set covering edge cases
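One common way to combine BM25 and dense-embedding rankings in a hybrid retriever is reciprocal rank fusion (RRF) ahead of the cross-encoder stage. The sketch below is illustrative — the document IDs are invented and RRF is an assumption; the case does not name the exact fusion method used.

```python
from collections import defaultdict

def reciprocal_rank_fusion(rankings, k=60):
    """Fuse several ranked lists of document IDs into one.

    `rankings` holds one ranked ID list per retriever (e.g. BM25 and a
    dense retriever); k=60 is the conventional RRF smoothing constant.
    """
    scores = defaultdict(float)
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] += 1.0 / (k + rank)
    # Highest fused score first
    return sorted(scores, key=scores.get, reverse=True)

# Hypothetical hit lists from the two first-stage retrievers
bm25_hits = ["doc_policy_2019", "doc_supplier_terms", "doc_regional_faq"]
dense_hits = ["doc_policy_2019", "doc_markdown_rules", "doc_supplier_terms"]

fused = reciprocal_rank_fusion([bm25_hits, dense_hits])
# fused[0] == "doc_policy_2019"; the top-N then go to the cross-encoder
```

Fusing before reranking keeps the expensive cross-encoder pass limited to a short, already-promising candidate list.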
02 / 3
Tier 1 Financial Services

A governed semantic layer for a bank with 38 reporting silos

The Context

A regional Tier 1 bank with 38 reporting domains across retail, commercial, treasury, and risk — each with its own definition of "active customer", "exposure", and "revenue". Regulatory reporting cycles required eight days of reconciliation.

The Technical Debt

The board could not get a single, defensible answer to questions like "what is our current exposure to sector X?" Each reporting team produced a different number. The bank had spent four years and over $40M with two global SIs trying to fix it, and the situation had worsened.

The Architecture

  • Canonical entity model (customer, exposure, transaction, product) ratified by the executive risk committee
  • Versioned semantic layer with contract-tested metric definitions deployed via dbt + SQLMesh
  • OpenLineage-based data lineage exposed to auditors and regulators on demand
  • Self-serve metric catalog with governed access — every metric has an owner, definition, and SLA
  • AI-readiness instrumentation: every governed entity is automatically embedded for downstream RAG
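A contract test over the metric catalog can enforce the "every metric has an owner, definition, and SLA" rule mechanically. This is a minimal sketch — the metric names, field names, and values are hypothetical, not the bank's actual schema:

```python
# Hypothetical catalog entries; field names are illustrative only.
METRIC_CATALOG = {
    "active_customer": {
        "owner": "retail-data-office",
        "definition": "Customer with >= 1 booked transaction in trailing 90 days",
        "sla_hours": 24,
        "version": "2.3.0",
    },
    "sector_exposure": {
        "owner": "risk-analytics",
        "definition": "Sum of drawn and undrawn committed exposure per sector",
        "sla_hours": 4,
        "version": "1.1.0",
    },
}

REQUIRED_FIELDS = {"owner", "definition", "sla_hours", "version"}

def validate_catalog(catalog):
    """Contract test: every governed metric must declare its full contract."""
    errors = []
    for name, spec in catalog.items():
        missing = REQUIRED_FIELDS - spec.keys()
        if missing:
            errors.append(f"{name}: missing {sorted(missing)}")
    return errors

# An empty error list means the catalog honours its contract
assert validate_catalog(METRIC_CATALOG) == []
```

Running a check like this in CI blocks any metric change that drops ownership or SLA metadata before it can reach a reporting consumer.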
03 / 3
Specialty Insurer

Board-grade LLM governance for a regulated insurer

The Context

A specialty insurer wanted to deploy LLM copilots across underwriting, claims, and customer service — but legal and the regulator required full traceability, redaction, and residency before a single prompt left the building.

The Technical Debt

The insurer had 11 in-flight generative AI experiments running on direct API keys with no central observability. The Chief Risk Officer froze all initiatives until governance was in place. Two prior vendors proposed 18-month, $14M programmes.

The Architecture

  • AI Gateway: a single, policy-enforced ingress for every LLM call across the enterprise
  • Per-tenant residency routing — local data stays local, with provable controls
  • Centralised redaction (PII, PHI, account numbers) applied before egress, with quarantine audit
  • Identity-bound prompt and response logging with cost attribution to team and use case
  • Model registry with evaluation gates and a documented appeal path for rejected outputs
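The redaction-before-egress step can be sketched as a gateway filter that swaps detected identifiers for typed placeholders and records each hit for the quarantine audit. The patterns below are deliberately simplistic and illustrative — a production gateway would pair deterministic rules like these with ML-based PII/PHI detection:

```python
import re

# Illustrative patterns only; real coverage would be far broader.
PATTERNS = {
    "ACCOUNT_NUMBER": re.compile(r"\b\d{8,12}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def redact(prompt):
    """Replace detected identifiers with typed placeholders before the
    prompt leaves the gateway; return redacted text plus an audit trail."""
    audit = []
    for label, pattern in PATTERNS.items():
        def _sub(match, label=label):
            audit.append((label, match.group()))
            return f"[{label}]"
        prompt = pattern.sub(_sub, prompt)
    return prompt, audit

clean, audit = redact("Claimant jane.doe@example.com, account 123456789")
# clean == "Claimant [EMAIL], account [ACCOUNT_NUMBER]"
```

Because redaction happens at the single ingress point, every one of the enterprise's LLM calls inherits the same controls, and the audit trail ties each redaction back to the identity and use case that triggered it.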