Engine v4.5 · H5 · SAL · SAL-X · ASC · EPR · AMB-001 · March 2026

The Question No Organization Can Answer About Its AI System

Not with a policy. Not with a disclosure. Not with alignment documentation.

HARS audits deployed AI systems for a specific class of violation: interpretive authority claims. The result is a verifiable audit artifact — runtime-bound, timestamped, and invalidated the moment the system changes.

The Problem

The Gap in AI Governance

Every major AI governance framework describes how a system is supposed to behave. None of them provide a verifiable mechanism to prove what a system is structurally incapable of doing.

That gap is still open.

When AI systems are deployed in healthcare, employment, education, or criminal justice, the question that arrives — in procurement, in audit, in litigation — is not whether a policy was in place. It is whether incapability can be proven.

Policy describes intent. HARS produces proof.

The Solution

What HARS Audits

HARS audits the deployed system — not a model version, not a vendor claim. It scans AI-authored output against explicit gate logic that detects interpretive authority violations: cases where the system diagnoses, evaluates, prognoses, or assigns meaning to a human subject.

Bound to the actual runtime — certification invalidates when the system changes

Gate-level violation classification — diagnostic, evaluative, prognostic, identity attribution

Verifiable audit artifact — suitable for regulatory review, procurement, and legal discovery

The HARS Seal: a timestamped, runtime-bound proof record — not a badge

H5 Upgrade

H5: The Quoted Term Count Guard

HARS has introduced H5 — a pre-scan attribution-layer transformer that improves audit fidelity by distinguishing quoted content from system-authored framing.

AI systems operating in clinical and legal contexts frequently quote interpretive language from source materials: physician's notes, legal standards, HR assessments. Without H5, that quoted content can trigger gate violations even though the system is not asserting interpretive authority of its own. H5 resolves this class of false positive before any gate runs.

H5 does not change gates. It changes attribution.

Same-Length Substitution

Quoted interiors replaced with equal-length spaces — preserves positional integrity for all downstream processing.

Zero-Overhead Pass-Through

When no quote delimiters exist, H5 adds no processing overhead and returns the line unchanged.

Apostrophe Guard

Contractions and possessives do not trigger false span detection. Forms like "doesn't" are handled correctly.

Audit Fidelity Restoration

When a real violation is found, the original unmodified line is restored into the audit record for full auditor review.
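The four behaviors above can be sketched in a few lines. This is a minimal illustration of H5-style quote masking, assuming the behavior as described — it is not the HARS implementation, and the function name is invented:

```python
import re

# Matched double-quoted spans only; apostrophes in contractions and
# possessives ("doesn't", "physician's") can never open a span.
QUOTE_SPAN = re.compile(r'"([^"]*)"')

def mask_quoted_spans(line: str) -> str:
    """Blank quoted interiors with equal-length runs of spaces,
    preserving every character position for downstream gates."""
    if '"' not in line:
        return line  # zero-overhead pass-through: no delimiters, no work
    return QUOTE_SPAN.sub(lambda m: '"' + ' ' * len(m.group(1)) + '"', line)
```

An unclosed quote never matches the pattern, so the line passes through untouched; and because masking is length-preserving, the original unmodified line can be swapped back into the audit record whenever a gate fires.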

Validation Results

Validated. Documented. Tested.

Suite A — Unit Tests

0 failures across 8 tests. Confirmed behaviors include pure-quote suppression, mixed-content preservation, no-quote pass-through, unclosed quote neutralization, transcript-style multi-span handling, apostrophe guard correctness, Python string literal treatment, and empty span counting.

Suite B — Mock Engine Comparison

Five mock files representing distinct attribution scenarios. H5 correctly suppressed quoted-span false positives, preserved all real violations in unquoted text, and produced zero effect where no quote delimiters were present.

Test File                  Pre-H5   H5   Interpretation
mock_pure_quoted.py           6      0   Quoted-span false positives fully suppressed
mock_mixed.py                 3      2   Real violation preserved; quoted suppressed
mock_no_quotes.py             2      2   Zero effect — no delimiters present
mock_malformed_quote.py       1      0   Unclosed quote neutralized correctly
mock_transcript.py            3      1   Multi-span: only unquoted violation fires

Suite C — Architectural Boundary Condition

Pre-H5: 97 violations across five real-world test fixture files. Post-H5: 0. This result reflects H5 correctly treating Python string literals in source fixture files as quoted spans. This is expected and documented behavior. Production AI output — rendered prose, JSON values, user-facing text — does not share this structure. Suite B models production deployment behavior.

v4.5 Post-Scan Refinement

SAL: The Subject-Awareness Layer

H5 resolves attribution before gates run. SAL runs after. The Subject-Awareness Layer applies ten post-lexical rules to each gate finding, determining whether the flagged term is structurally directed at a human subject — or whether context rules it out.

SAL evaluates each finding against four analytical components: Interpretive Target Resolution (ITR), Modal Phrase Check (MPC), Proper Noun Sentinel (PNS), and Subject-Linkage Chain (SLC). A finding downgraded by SAL is reclassified — not silently dropped. The audit record preserves both the original finding and the downgrade rule that fired.

Four gate classes are immune to SAL downgrade. Identity assignment, conclusion-layer claims, tooltip violations, and badge mismatches are NON_DOWNGRADE_GATES: they cannot be reclassified regardless of context.
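As a sketch under assumed names — the gate identifiers, severity labels, and field names here are illustrative, not the HARS schema — the downgrade path looks like this:

```python
from dataclasses import dataclass
from typing import Optional

# Gate classes permanently exempt from SAL downgrade (names assumed).
NON_DOWNGRADE_GATES = {"identity_assignment", "conclusion_layer",
                       "tooltip_violation", "badge_mismatch"}

@dataclass
class Finding:
    gate: str
    severity: str                          # original severity, always preserved
    amended_severity: Optional[str] = None
    downgrade_rule: Optional[str] = None   # e.g. "SAL-03", when one fires

def apply_sal(finding: Finding, rule_fired: Optional[str]) -> Finding:
    """Reclassify a finding unless its gate class is downgrade-immune.
    Nothing is dropped: the original severity and the rule that fired
    both remain on the record."""
    if rule_fired and finding.gate not in NON_DOWNGRADE_GATES:
        finding.amended_severity = "DOWNGRADED"
        finding.downgrade_rule = rule_fired
    return finding
```

Reclassification adds fields rather than overwriting them, which is what preserves the zero-information-loss property described below.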

SAL-01 through SAL-10

Ten rules executed in sequence. Each addresses a distinct class of contextual misattribution.

Downgrade Audit Trail

Every reclassification is logged with the specific SAL rule that fired. Audit transparency is fully preserved.

Non-Downgrade Immunity

Four gate classes are permanently exempt from SAL refinement. Identity assignment violations always remain RED.

Zero Information Loss

No finding is dropped. All are preserved in the receipt with their original severity and any amended classification.

Speech-Act Extension

SAL-X: Identity and Authority Assertions

SAL addresses what a system says about a human subject. SAL-X addresses how the system speaks — specifically, whether it performs identity assignment, role-taking, or authority claims directed at or on behalf of a human.

SAL-X-01 through SAL-X-07 detect a class of violation that lexicon gates alone cannot surface: cases where the system declares itself to occupy a human role ("You are a medical transcription editor"), asserts interpretive authority over a human domain, or performs a speech-act that positions the system as a human-equivalent agent. These patterns are common in clinical, legal, and HR deployment contexts.

SAL-X does not operate on prohibited term lists. It operates on structural speech patterns — the grammar of authority assertion.
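To illustrate the kind of structural pattern involved — the actual SAL-X rules are not published here, and this regex is an assumption, not one of them:

```python
import re

# Illustrative pattern for system-prompt identity assignment of the
# "You are a [role]" form. An assumed example, not a SAL-X rule.
ROLE_ASSIGNMENT = re.compile(
    r"\byou\s+are\s+(?:a|an|the)\s+[A-Za-z][\w -]*", re.IGNORECASE
)

def assigns_identity(text: str) -> bool:
    """True if the text declares the system to occupy a named role."""
    return bool(ROLE_ASSIGNMENT.search(text))
```

Note that the match requires no prohibited term at all: "medical transcription editor" is an ordinary job title, and only the grammar of the assertion triggers the rule.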

SAL-X-01 through SAL-X-07

Seven rules targeting identity assignment, role-taking, and authority assertion patterns.

Clinical Deployment Coverage

System-prompt identity assignment — "You are a [role]" constructs — is a primary target. Common in medical and legal AI tooling.

Pattern-Based, Not Lexical

SAL-X operates on structural speech patterns. It catches violations that contain no prohibited terms.

Composable with SAL

Both layers run in sequence. SAL-X findings enter the same downgrade audit trail as SAL findings.

HARS-EPR-001

Execution Provenance Record

Every v4.5 audit receipt includes an Execution Provenance Record — a structured JSON block that documents the exact conditions under which the audit was executed. The EPR is not a summary. It is a machine-readable, cryptographically sealed record of the audit execution itself.

The EPR captures: engine version, operator identity, target path, files discovered and scanned, files excluded and why, SAL and SAL-X downgrade counts, gate execution summary per error code, and the final verdict. It is sealed with a record_hash — a SHA-256 of the canonical JSON, computed before the hash field itself is written. Any post-issuance modification is cryptographically detectable.
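The sealing scheme described above can be sketched as follows. The canonicalization details and field names other than record_hash are illustrative assumptions, not the HARS specification:

```python
import hashlib
import json

def seal_epr(epr: dict) -> dict:
    """Write record_hash = SHA-256 of the canonical JSON of every
    field except record_hash itself."""
    body = {k: v for k, v in epr.items() if k != "record_hash"}
    canonical = json.dumps(body, sort_keys=True, separators=(",", ":"))
    epr["record_hash"] = hashlib.sha256(canonical.encode("utf-8")).hexdigest()
    return epr

def epr_intact(epr: dict) -> bool:
    """Detect any post-issuance modification by recomputing the hash."""
    body = {k: v for k, v in epr.items() if k != "record_hash"}
    canonical = json.dumps(body, sort_keys=True, separators=(",", ":"))
    return epr.get("record_hash") == hashlib.sha256(
        canonical.encode("utf-8")).hexdigest()
```

Because the hash is computed before the record_hash field is written, verification simply strips that field and recomputes; any edit to any other field changes the digest.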

The EPR binds the audit artifact to the audit execution. A receipt without a valid EPR cannot be considered a HARS v4.5 audit record.

Scope

What HARS Documents — Including Its Limits

An audit system that acknowledges its boundaries is more credible than one that does not. H5's documented limitations are explicit:

Unquoted narrative attribution is not currently resolved by H5

Unicode smart quotes are not yet included in span detection

Multiline block quotes not joined by the preprocessor are processed as individual lines

These limitations define H5's scope. Each represents a documented engineering decision point and an active area of audit methodology development.

Audit

Begin With Proof

If your AI system is deployed in healthcare, employment, education, or legal contexts, the question of what it cannot do is coming. The variable is only whether you answer it first.

Audit credibility depends not only on catching violations, but on correctly attributing them.

Request a HARS Audit →
The Standard →