AML Open Framework · 2026

An AML program your regulator can replay.

One Compliance Manifest defines the program. The engine generates SQL, dashboards, MRM dossiers, and the examiner pack from it. An immutable hash-chained audit ledger lets any historical run replay byte-for-byte. Apache 2.0, runs in your perimeter, no per-seat licence.

10 example specs · 8 curated typologies · 32 dashboard pages · 38 CLI commands · 2,000+ tests, all green · 5 jurisdictions · Apache 2.0 licence
The data layer underneath

Connect once. Validate forever.

AML's binding constraint isn't detection — it's getting one clean view across core banking, payment rails, KYC, sanctions, and fraud. The framework ships 9 connectors (CSV / Parquet / DuckDB / Snowflake / BigQuery / S3 / GCS / synthetic / ISO 20022 native), data-contract enforcement with per-attribute freshness pinning, and a DATA-N → artifact map linking each of the 11 whitepaper data pains to the page / CLI / module that closes it.

9 connectors out of the box · ISO 20022: pacs.008 · pacs.009 · pacs.004 · pain.001 · 11 data pains, one surface · Fail-closed contract validation

See it live: Data Problem whitepaper · Data Integration page · Architecture · Data integration.
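Fail-closed contract validation of the kind described above can be sketched in a few lines. This is an illustrative sketch only: `FieldContract`, `validate_record`, and the SLA fields are hypothetical names, not the framework's actual API.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone


@dataclass(frozen=True)
class FieldContract:
    name: str
    required: bool
    max_staleness: timedelta  # per-attribute freshness pin


class ContractViolation(Exception):
    """Raised so the pipeline fails closed: no silent partial loads."""


def validate_record(record: dict, contract: list[FieldContract],
                    now: datetime) -> None:
    # Completeness: every required attribute must be present and non-null.
    for field in contract:
        if field.required and record.get(field.name) is None:
            raise ContractViolation(f"missing required field: {field.name}")
    # Freshness: reject any record older than its tightest pinned SLA.
    age = now - record["loaded_at"]
    if any(age > f.max_staleness for f in contract):
        raise ContractViolation(f"stale record: age={age}")


# Usage: a clean record passes silently; a violating one aborts the load.
contract = [FieldContract("account_id", True, timedelta(hours=24))]
now = datetime(2026, 4, 1, tzinfo=timezone.utc)
ok = {"account_id": "A1", "loaded_at": now - timedelta(hours=1)}
validate_record(ok, contract, now)
```

The design point is that validation raises rather than filters: a contract breach stops the batch instead of letting a partially clean dataset flow into detection.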

The walk-back regulators ask for

Trace every alert. Down to the row.

When an examiner asks "show me why this alert fired", the answer is one paste box away. The framework stamps the source file path, schema hash, rendered SQL, rule version, and the exact matched source rowids on every alert. walk_lineage(case_id) returns the chain; the new Lineage Explorer page renders it: source → DuckDB table → query → matched rows → alert → case → STR. Every link is hash-chained so a re-run with the same spec and data produces an identical chain.

7-link chain, source row → STR · Hash-chained, tamper-evident · Reproducible: same spec + data → same hashes · JSON export for SAR attachment

See it live: Lineage walk-back deep-dive · Lineage Explorer page · Audit & Evidence reference.
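The hash-chaining idea behind the lineage chain can be sketched as follows. This is a minimal illustration of the principle, not the framework's `walk_lineage` implementation; `link_hash`, `build_chain`, and the payload fields are hypothetical.

```python
import hashlib
import json


def link_hash(payload: dict, prev_hash: str) -> str:
    # Each link commits to its own payload AND the previous link's hash,
    # so altering any earlier link changes every hash after it.
    body = json.dumps(payload, sort_keys=True) + prev_hash
    return hashlib.sha256(body.encode()).hexdigest()


def build_chain(links: list[dict]) -> list[str]:
    hashes, prev = [], "genesis"
    for payload in links:
        prev = link_hash(payload, prev)
        hashes.append(prev)
    return hashes


def verify_chain(links: list[dict], hashes: list[str]) -> bool:
    # Deterministic rebuild: same links in, same hashes out.
    return build_chain(links) == hashes


chain = [
    {"link": "source", "path": "txns.parquet"},
    {"link": "sql", "rendered": "SELECT ..."},
    {"link": "alert", "rule": "struct_v3"},
]
hashes = build_chain(chain)
assert verify_chain(chain, hashes)       # intact chain verifies
chain[0]["path"] = "tampered.parquet"
assert not verify_chain(chain, hashes)   # any edit breaks the chain
```

Tamper-evidence falls out of the structure: an auditor who holds only the final hash can detect a change anywhere upstream.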

Pick your deck.

Same framework, two faithful narrations — a McKinsey-style board pack for executives, and a dual-pane architecture deck for engineering and 2LoD. Each side has a live HTML deck, a narrated video, and a printable PDF. Click any artifact — it opens in the viewer with the top nav still available, so switching sides is one click.

For executives

Board briefing

CCO · MLRO · Audit Committee · CRO · CFO

McKinsey-style board pack. Action titles, exhibit numbers, primary-source citations on every page. White-paper aesthetic — print-ready, email-able.

Open the board deck →
For engineering & 2LoD

Technical deck

Head of Eng · CTO · 2LoD validation · Internal Audit

Dual-pane architecture deck. CCO question on the left, real CLI output on the right. 7-act narrative covering audit chain, backtester, multi-jurisdiction.

Open the technical deck →

The problem regulators actually find.

Across recent enforcement orders, regulators rarely allege the bank missed a typology. They allege the bank cannot evidence what it did. Process and governance gaps outnumber data and model gaps roughly 2:1 in the consent orders surveyed.

FCA Dear-CEO Letter · Mar 2024
"Decisions made in relation to financial crime were not supported by evidence or an audit trail of debate and challenge."
UK retail banks + Annex 1 firms · still operative in 2026 · PAIN-1 in research
FinCEN TD Bank consent order · Oct 2024 · $3.09B
"Trillions of dollars in transactions annually [went] unmonitored." The detection queue was in "red status" in board reporting for years.
Largest BSA enforcement of 2024 · framing case for 2025-26 · case study
LexisNexis True Cost · Feb 2024
"Annual cost of financial crime compliance totals $61 billion in the United States and Canada." 57% labour, 40% tech, 3% other.
FinCEN Sep 2025 RFI framed it: "is the juice worth the squeeze?" · PAIN-8 in research

What we built — one Manifest, four layers.

Policy, data contracts, detection rules, case workflow and regulator mapping live in one versioned document — the Compliance Manifest (an aml.yaml file, for engineers). Every runtime artifact is generated from it; an immutable hash-chained audit ledger records every decision so any historical run can replay byte-for-byte. This is what kills the drift that causes AML enforcement actions.

Authored by
CCO / MLRO — writes the Manifest; signs the decision log.
Operated by
Engineer / 1LoD — runs the generators + the engine.
Operated by
Engineer / 1LoD — manages the alert + case stream.
Verified by
Audit / Regulator — replays history byte-for-byte.
📜 Policy · The Compliance Manifest · aml.yaml for engineers · reviewed via PR
program: jurisdiction · regulator · owner
data_contracts: tables · columns · SLAs · PII
rules: logic + regulation_refs
workflow: queues · SLAs · escalations
reporting: SAR · CTR · STR forms
boi · mrm: BOI refresh · model risk
⚙️ Generation · Manifest → artifacts · deterministic, reproducible
sql_generator: rule SQL · DuckDB · Snowflake
dag_generator: Airflow · Dagster stubs
tests_generator: data-quality + fixtures
docs_generator: persona-specific markdown
control_matrix: auditor-facing mapping
mrm_dossier: per-rule MRM bundle
Runtime · execute rules · produce alerts + cases
ingest: data contract enforced · ISO 20022 native
rule engine: windowed aggregations · joins · network
case manager: queues · SLAs · reviewer notes
boi workflow: beneficial-owner refresh
tuning lab: backtester + threshold sweeps
regulator export: SAR bundle · audit-pack CLI
🔒 Evidence · immutable audit ledger · signed manifests · hash-chained
spec_version_hash: git SHA + content hash
input_hash: data snapshot hash
output_hash: deterministic-rerun proof
decision_log: reviewer actions + reason
Verified by
2LoD — challenges the Manifest quarterly.
Validated by
MRM — independent re-implementation under SR 26-2.
Reviewed by
2LoD / Audit — sample-checks alerts, reads decision log.
Replayed by
Internal Audit — re-runs any historical execution byte-for-byte.
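For engineers, a hypothetical aml.yaml fragment shows how the six Policy sections read in practice. Field names and values here are illustrative, not the framework's exact schema:

```yaml
program:
  jurisdiction: CA
  regulator: FINTRAC
  owner: mlro@example.com
data_contracts:
  transactions:
    sla_hours: 24
    pii: [account_holder_name]
rules:
  - id: structuring_v3
    logic: sum(amount) over 24h per account exceeds threshold across 3+ txns
    regulation_refs: [PCMLTFA s.9]
workflow:
  queues: [level1, level2]
  escalation_sla_hours: 48
reporting:
  forms: [STR, LCTR]
```

Because the file is plain text, "reviewed via PR" means exactly that: a threshold change is a diff, with the reviewer and rationale preserved in version control.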

How it works in five stages.

One Compliance Manifest defines policy. Generation, runtime and audit all flow from it — deterministically, every time.

📜
Stage 01
Compliance Manifest
Compliance writes one spec. Rules cite the regulation that justifies them.
⚙️
Stage 02
Generators
SQL, DAGs, tests, MRM dossiers, control matrix — all built from the Manifest.
Stage 03
Engine
Rules execute on the warehouse. Alerts produced with full evidence chain.
Stage 04
Cases & STRs
SLA-timed queues, auto-drafted narratives, regulator-ready ZIP exports.
🔒
Stage 05
Audit ledger
Every decision SHA-256-chained. Re-run any history byte-for-byte.
same Manifest + same data + same seed = identical output hashes
FCA Mar 2024 · FinCEN Apr 2026 NPRM · SR 26-2 effective Apr 2026
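The determinism guarantee in Stage 05 reduces to hashing a canonical serialisation of the output. The sketch below shows the principle with a stand-in pure function; `run_rules` and `output_hash` are illustrative names, not the framework's engine.

```python
import hashlib
import json


def run_rules(manifest: dict, data: list[dict], seed: int) -> list[dict]:
    # Stand-in for the rule engine: any pure function of its inputs
    # yields identical outputs, and therefore identical hashes, on replay.
    threshold = manifest["rules"]["threshold"]
    return sorted(
        (r for r in data if r["amount"] > threshold),
        key=lambda r: r["id"],
    )


def output_hash(alerts: list[dict]) -> str:
    # Canonical serialisation (sorted keys) so logically equal outputs
    # always hash identically, regardless of dict ordering.
    canonical = json.dumps(alerts, sort_keys=True).encode()
    return hashlib.sha256(canonical).hexdigest()


manifest = {"rules": {"threshold": 10_000}}
data = [{"id": 1, "amount": 12_500}, {"id": 2, "amount": 900}]
h1 = output_hash(run_rules(manifest, data, seed=42))
h2 = output_hash(run_rules(manifest, data, seed=42))
assert h1 == h2  # same Manifest + same data + same seed -> same hash
```

The practical consequence: an auditor does not have to trust that a historical run was correct, only re-run it and compare one hash.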

Same Manifest, four sizes.

Lean fintech to Tier-1 bank — different scope, same single-source-of-truth pattern. The framework fits where you are.

Mid-tier bank · Pilot + 2LoD challenger · 10–20 compliance · single jurisdiction
Tier-1 bank · MRM challenger model · 100+ FTE · global · SR 26-2
FinTech ★ · Primary platform · 1 MLRO · pilot in weeks
Scaling fintech · Cross-border platform · small team · multi-jurisdiction · ISO 20022
FinTech / EMI applicant ★ Primary
One lean team holds the whole program. Demo on synthetic data in minutes; pilot on your data in weeks; defensible program (tuned + 2LoD-reviewed) in months — not the 9-24-month commercial deploy window. Once running, the cure-notice evidence pack is one CLI command and the investor-DD answer is a query, not a consultant engagement.
Scaling fintech / VASP / cross-border Platform
Multi-jurisdiction templates (US/CA/EU/UK), ISO 20022 native, BOI workflow, FATF Travel Rule. Lean team gets enterprise reach without enterprise headcount.
Mid-tier bank Pilot
Runs alongside vendor TM as proof-of-concept. Becomes the 2LoD independent challenger model SR 26-2 expects — without buying a second commercial licence.
Tier-1 bank 2LoD challenger
MRM independent re-implementation under SR 26-2 / OCC 2026-13. Deterministic re-run + hash-chained audit ledger as published guarantee — what no commercial vendor offers.

Examiner-defensible, without the vendor.

Four categories a CCO actually picks between, plotted on what 2026 enforcement scores you on — can you prove it (Y axis) and do you control it (X axis).

Vendor-locked · audit-mature
★ Defensible & independent
Cloud SaaS · SR 26-2 ML tax
Cheap · undefendable
Commercial enterprise: Actimize · Oracle · SAS · ~$50M TCO
★ AML Open Framework: Apache 2.0 · deterministic re-run · audit ledger
AI-native SaaS: Hawk:AI · ComplyAdvantage · Feedzai
Spreadsheets + DIY: manual evidence · no replay
"Examiner-defensible" (Y axis)
Can you replay any historical run byte-for-byte and prove what fired, when, and against which regulation? Hash-chained audit ledger + deterministic re-run is a published guarantee — no other platform offers it as a contract.
"You control the program" (X axis)
Who owns the rule library, the thresholds, and the audit trail? Vendor-controlled means a 9-24 month deploy + per-jurisdiction module fees + extraction friction. You-controlled means the Manifest is yours; the engine runs in your perimeter; the spec is plain text under version control.
Why we land top-right
Apache 2.0 + deterministic re-run + hash-chained audit ledger + 5 jurisdictions with 10 bundled specs. Commercial vendors are audit-mature but vendor-locked; AI-native SaaS carries the SR 26-2 model-validation tax; spreadsheets are cheap but won't survive a regulator. This is the only point on the chart that's both.

Built on.

Honest tech stack. Pip-installable, runs in your perimeter, no cloud dependencies. The dashboard, the REST API and the engine all share one codebase — same Manifest in, same answers out.

Python ≥3.10: engine + CLI + generators
Pydantic v2: strict spec validation
DuckDB: in-process columnar SQL
FastAPI: REST + JWT/OIDC + multi-tenant
Streamlit: 32 dashboard pages
ISO 20022 native: pacs.008/009/004 + Travel Rule
Hash-chained audit ledger: SHA-256 · deterministic re-run
goAML XML: STR / SAR export · Mermaid networks
Apache 2.0: your perimeter, your control

Primary research & source code.

Every claim in the decks above traces to one of these documents in the source repo. The decks are interpretation; this is the evidence.

Research · Style guide

10 daily pain points an AML leader feels — 2026

Each pain anchored in an FCA / FinCEN / OSFI / FINTRAC publication, with the verbatim quote, role affected, cost type, and the framework capability that addresses it.

research/process-pain → opens in viewer
Case study · 2024

The largest BSA enforcement action of 2024 — $3.09B

Five findings — channel coverage, pass-through, shell companies, internal-alert failures, SAR delays — each mapped clause-by-clause to a specific Manifest entry in the framework.

research/td-2024 → opens in viewer
Research · FinTech reality

FinTech AML reality — sponsor-bank pressure + AMLA scope + 2024-26 enforcement

The companion to the 10-pain Tier-1 style guide, written for the FinTech / EMI / MSB / VASP MLRO. 8 realities anchored in 2024-26 enforcement (Block / Cash App $80M, Starling £29M, Coinbase $100M, Robinhood Crypto $30M) plus sponsor-bank cure-notice dynamics post-Synapse / Evolve.

research/fintech → opens in viewer
Research · Landscape

Competitive positioning · 2026

Where the framework slots in against veteran rules+ML platforms, AI-native challengers, graph specialists, OSS alternatives — plus the buyer-archetype matrix mapping the four landing personas to the wins they actually care about.

research/competitive → opens in viewer
Research · 90-day regulator pulse

What's moved in the last 90 days — Feb-Apr 2026

30 events across 10 regulators / 8 jurisdictions. SR 26-2 effective 2026-04-17, FinCEN AML Effectiveness NPRM, AMLA RTS consultations, FATF grey-listing of Kuwait + PNG, EU's 20th Russia sanctions package. Every entry primary-source-only.

research/regulator-pulse → opens in viewer
Research · Whitepaper

Data is the AML problem — 11 faces, primary-source only

The binding constraint underneath audit-defensibility is data — completeness, accuracy, lineage, sovereignty. Anchored in BCBS 239, FinCEN's April 2026 NPRM, OSFI E-23 / B-13, FCA Annex 1, and the largest 2024 enforcement orders (TD, RBC, HSBC's $670B unmonitored).

research/data-problem → opens in viewer
Research · Walk-back

Trace every alert. Down to the row.

The 7-link evidence chain regulators ask for and most vendors can't produce: source file path → schema hash → DuckDB table → rendered SQL → matched source rowids → alert → case → STR. Every link hash-stamped, reproducible, and downloadable as JSON. 12 fields stamped on every decision; walk_lineage(case_id) returns the chain.

research/lineage → opens in viewer
Framework · Dashboard surface

Data Integration — 9 connectors, ISO 20022 native, contract-validated

The operator-facing answer to "what data is flowing through this AML program?". Source catalogue, contract roll-up in whitepaper vocabulary (completeness / staleness / checks), ISO 20022 message-type counts, and a DATA-N → artifact map linking each of the 11 whitepaper data pains to the page / CLI / module that closes it.

docs/dashboard-tour.md#data-integration → GitHub
Source · GitHub

github.com/tomqwu/aml_open_framework

10 example specs · 38 CLI commands · 1,980+ tests · 32 dashboard pages · Apache 2.0. Runs entirely in your perimeter.

git clone … && pip install -e ".[dev,dashboard,api]"
Architecture · Design rationale

One Manifest. Four layers. No drift.

The Compliance Manifest pattern explained: Policy → Generation → Runtime → Evidence. Persona arrows show who authors and who verifies each layer.

research/architecture → opens in viewer
Archive

V1 deck (Mar 2026 · preserved)

The first iteration of the pitch deck — 27 slides, mixed audience, before the 2026 narrative refresh. Kept here so anyone with a bookmarked link still finds it.

/v1-archive/