Institutional Use Only

Institutional Implementation Guide

Full Matrices, Schemas, Audit Mappings, Export Packages

Version: 1.0
Effective Date: December 29, 2025
Classification: Institutional / Deployment Teams
Issuing Entity: TeraSystemsAI

Non-Negotiable Constraint

This guide may not be interpreted or implemented in any way that grants authority to AI systems, allows automated final decisions, obscures accountability ownership, or replaces human judgment. Any implementation that violates this constraint is non-compliant.

1. Purpose of This Guide

What This Guide Does

What This Guide Does Not Do

Explicit Limitations

This guide does not certify ethical behavior. It does not guarantee correctness of AI outputs. It does not eliminate institutional liability. It does not replace legal counsel review. Institutions remain fully responsible for deployment decisions.

2. Responsibility Matrix - Full Specification

The Responsibility Matrix is a mandatory system artifact. Every AI-assisted outcome must produce exactly one matrix record. No matrix record means no valid output.

Required Fields

| Field | Type | Required | Description |
| --- | --- | --- | --- |
| matrix_id | UUID | Yes | Unique identifier for this responsibility record |
| outcome_id | UUID | Yes | Reference to the AI-assisted outcome |
| ai_system_id | String | Yes | Identifier and version of AI system |
| ai_recommendation | Object | Yes | AI output with confidence and uncertainty |
| human_reviewer_id | String | Yes | Authenticated identity of human reviewer |
| human_decision | Enum | Yes | APPROVED / REJECTED / MODIFIED |
| human_rationale | String | Conditional | Required if REJECTED or MODIFIED |
| policy_version | String | Yes | Active policy version at decision time |
| escalation_triggered | Boolean | Yes | Whether escalation occurred |
| escalation_reason | String | Conditional | Required if escalation_triggered = true |
| institution_id | String | Yes | Accountable institution identifier |
| created_at | ISO 8601 | Yes | Immutable creation timestamp |
| checksum | SHA-256 | Yes | Integrity verification hash |

Conceptual Schema

ResponsibilityMatrix Schema

ResponsibilityMatrix {
  matrix_id:           UUID [PRIMARY KEY, IMMUTABLE]
  outcome_id:          UUID [FOREIGN KEY, NOT NULL]
  
  // AI Contribution
  ai_system_id:        STRING [NOT NULL]
  ai_recommendation:   JSON {
    output:            ANY
    confidence:        FLOAT [0.0-1.0]
    uncertainty:       FLOAT [0.0-1.0]
    model_version:     STRING
  }
  
  // Human Review
  human_reviewer_id:   STRING [NOT NULL, AUTHENTICATED]
  human_decision:      ENUM [APPROVED, REJECTED, MODIFIED]
  human_rationale:     STRING [REQUIRED IF decision != APPROVED]
  review_timestamp:    TIMESTAMP [NOT NULL]
  
  // Policy Constraints
  policy_version:      STRING [NOT NULL]
  policy_rules_applied: ARRAY[STRING]
  
  // Escalation State
  escalation_triggered: BOOLEAN [NOT NULL]
  escalation_reason:    STRING [REQUIRED IF triggered]
  escalation_timestamp: TIMESTAMP [NULLABLE]
  
  // Institutional Ownership
  institution_id:      STRING [NOT NULL]
  department_id:       STRING [NULLABLE]
  
  // Audit Fields
  created_at:          TIMESTAMP [IMMUTABLE]
  checksum:            SHA256 [COMPUTED]
}
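
The following is a minimal illustrative sketch, not a reference implementation. It assumes Python, a canonical JSON serialization, and a helper name (build_matrix_record) that is not defined by this guide; it shows how one record could be assembled and how the SHA-256 checksum could be computed over every field except the checksum itself.

# Illustrative sketch only: assembles one matrix record as a dict and computes
# its SHA-256 checksum over a canonical JSON serialization. The helper name and
# serialization choice are assumptions, not part of the specification.
import hashlib
import json
import uuid
from datetime import datetime, timezone

def build_matrix_record(outcome_id, ai_system_id, ai_recommendation,
                        human_reviewer_id, human_decision, policy_version,
                        institution_id, human_rationale=None,
                        escalation_triggered=False, escalation_reason=None):
    # Conditional-field rules from the Required Fields table.
    if human_decision in ("REJECTED", "MODIFIED") and not human_rationale:
        raise ValueError("human_rationale is required when decision is not APPROVED")
    if escalation_triggered and not escalation_reason:
        raise ValueError("escalation_reason is required when escalation_triggered is true")

    record = {
        "matrix_id": str(uuid.uuid4()),
        "outcome_id": outcome_id,
        "ai_system_id": ai_system_id,
        "ai_recommendation": ai_recommendation,   # output, confidence, uncertainty, model_version
        "human_reviewer_id": human_reviewer_id,
        "human_decision": human_decision,         # APPROVED / REJECTED / MODIFIED
        "human_rationale": human_rationale,
        "policy_version": policy_version,
        "escalation_triggered": escalation_triggered,
        "escalation_reason": escalation_reason,
        "institution_id": institution_id,
        "created_at": datetime.now(timezone.utc).isoformat(),
    }
    # Checksum covers every field except itself; sorted keys keep the hash
    # stable across serializers.
    canonical = json.dumps(record, sort_keys=True, separators=(",", ":"))
    record["checksum"] = hashlib.sha256(canonical.encode("utf-8")).hexdigest()
    return record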

Immutability Requirement

Matrix records must be append-only. No UPDATE or DELETE operations are permitted after creation. Corrections require new matrix records with explicit references to superseded records.
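
A minimal sketch of the append-only pattern follows, assuming Python and an in-memory list in place of durable storage. The field name supersedes_matrix_id is an assumption used here for illustration; the specification only requires an explicit reference to the superseded record.

# Illustrative sketch only: corrections are new appended records, never edits.
class AppendOnlyMatrixStore:
    """Append-only store: no update or delete paths are exposed."""

    def __init__(self):
        self._records = []

    def append(self, record):
        self._records.append(dict(record))   # store a copy so callers cannot mutate it later

    def supersede(self, superseded_matrix_id, corrected_record):
        # A correction is just another append; the superseded record is untouched.
        corrected = dict(corrected_record)
        corrected["supersedes_matrix_id"] = superseded_matrix_id   # assumed field name
        # A full implementation would recompute the checksum to cover the added reference.
        self._records.append(corrected)
        return corrected

    def all_records(self):
        return [dict(r) for r in self._records]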

Storage & Retention

3. Role Enforcement Schemas

Each role has explicit permissions, prohibitions, and attestation requirements. These are governance constraints, not application features.

AI System Role
Permitted Actions
  • Generate recommendations
  • Compute confidence scores
  • Quantify uncertainty
  • Flag escalation conditions
  • Log outputs to matrix
Prohibited Actions
  • Mark output as "final"
  • Execute decisions
  • Bypass human review
  • Modify policy rules
  • Accept responsibility
Required Attestations
  • Output is recommendation only
  • Confidence bounds are valid
  • Escalation rules evaluated
  • Policy version recorded

Human Reviewer Role
Permitted Actions
  • Review AI recommendations
  • Approve, reject, or modify
  • Override AI output
  • Request additional review
  • Document rationale
Prohibited Actions
  • Delegate to AI system
  • Approve without review
  • Bypass escalation
  • Modify audit records
  • Transfer accountability
Required Attestations
  • Review was performed
  • AI output was evaluated
  • Decision is independent
  • Accountability accepted

Policy Configuration Role
Permitted Actions
  • Define thresholds
  • Set escalation triggers
  • Configure constraints
  • Version policy changes
  • Audit policy history
Prohibited Actions
  • Disable escalation
  • Remove human review
  • Grant AI authority
  • Delete policy history
  • Bypass approval flow
Required Attestations
  • Changes are authorized
  • Version is incremented
  • Invariant preserved
  • Audit trail updated

Institution Role
Permitted Actions
  • Accept liability
  • Enforce governance
  • Authorize deployments
  • Review audit records
  • Respond to regulators
Prohibited Actions
  • Transfer accountability to AI
  • Claim AI decided
  • Obscure responsibility
  • Disable audit logging
  • Modify historical records
Required Attestations
  • Framework compliance
  • Audit readiness
  • Liability acknowledged
  • Governance enforced

Enforcement Rules

Role Enforcement Logic

RULE: AI_OUTPUT_NEVER_FINAL
  IF output.status == "FINAL" AND output.source == "AI"
  THEN REJECT with "AI outputs cannot be marked final"

RULE: HUMAN_DECISION_REQUIRED
  IF matrix.human_decision IS NULL
  THEN BLOCK output propagation

RULE: POLICY_VERSION_BOUND
  IF matrix.policy_version != active_policy.version
  THEN REJECT with "Policy version mismatch"

RULE: ESCALATION_NON_BYPASSABLE
  IF escalation_condition_met == TRUE
  THEN REQUIRE human_review
  AND BLOCK auto_approval
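
The rules above can be expressed as a single gate check. The following is an illustrative Python sketch; record shapes follow the Responsibility Matrix schema, while names such as "status" and "source" on the output object are assumptions for illustration.

# Illustrative sketch only: the four enforcement rules as one gate function.
def enforce_rules(output, matrix, active_policy_version, escalation_condition_met):
    violations = []

    # RULE: AI_OUTPUT_NEVER_FINAL
    if output.get("status") == "FINAL" and output.get("source") == "AI":
        violations.append("AI outputs cannot be marked final")

    # RULE: HUMAN_DECISION_REQUIRED
    if matrix.get("human_decision") is None:
        violations.append("Output blocked: no human decision recorded")

    # RULE: POLICY_VERSION_BOUND
    if matrix.get("policy_version") != active_policy_version:
        violations.append("Policy version mismatch")

    # RULE: ESCALATION_NON_BYPASSABLE
    if escalation_condition_met and not (matrix.get("escalation_triggered")
                                         and matrix.get("human_decision")):
        violations.append("Escalation required: human review mandatory, auto-approval blocked")

    return {"allowed": not violations, "violations": violations}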

4. Escalation Logic - Operational Mapping

Escalation Triggers

| Trigger Type | Condition | Threshold Example | Action |
| --- | --- | --- | --- |
| Confidence | confidence < threshold | confidence < 0.85 | Mandatory review |
| Uncertainty | uncertainty > threshold | uncertainty > 0.20 | Mandatory review |
| Bias Flag | bias_score > threshold | bias_score > 0.10 | Mandatory review |
| Domain Risk | risk_category IN high_risk | Life-safety, legal, financial | Mandatory review |
| Anomaly | input NOT IN distribution | Out-of-distribution detected | Mandatory review |
| Manual | human_request == TRUE | Any user request | Review initiated |
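
A minimal Python sketch of trigger evaluation follows. The field names on the output object (bias_score, risk_category, out_of_distribution) are assumptions, and the default thresholds mirror the example column above; in a deployment, thresholds would come from the active policy version.

# Illustrative sketch only: evaluates the trigger table against one AI output.
HIGH_RISK_CATEGORIES = {"life-safety", "legal", "financial"}

def escalation_triggers(output, policy, human_requested=False):
    triggers = []
    if output["confidence"] < policy.get("confidence_threshold", 0.85):
        triggers.append("LOW_CONFIDENCE")
    if output["uncertainty"] > policy.get("uncertainty_threshold", 0.20):
        triggers.append("HIGH_UNCERTAINTY")
    if output.get("bias_score", 0.0) > policy.get("bias_threshold", 0.10):
        triggers.append("BIAS_FLAG")
    if output.get("risk_category") in HIGH_RISK_CATEGORIES:
        triggers.append("DOMAIN_RISK")
    if output.get("out_of_distribution", False):
        triggers.append("ANOMALY")
    if human_requested:
        triggers.append("MANUAL_REQUEST")
    return triggers   # any non-empty result means mandatory human review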

Escalation Sequence

Escalation State Machine

STATE: AI_PROCESSING
  ON trigger_detected:
    FREEZE ai_output
    SET status = ESCALATED
    TRANSITION TO AWAITING_REVIEW

STATE: AWAITING_REVIEW
  REQUIRE human_reviewer_assignment
  PROVIDE full_context {
    ai_recommendation
    confidence_scores
    uncertainty_bounds
    escalation_reason
    policy_rules
  }
  ON human_decision:
    LOG to responsibility_matrix
    TRANSITION TO DECISION_LOGGED

STATE: DECISION_LOGGED
  SET matrix.human_decision
  SET matrix.human_reviewer_id
  SET matrix.review_timestamp
  COMPUTE matrix.checksum
  TRANSITION TO COMPLETE

STATE: COMPLETE
  OUTPUT is now valid
  RESPONSIBILITY assigned to human_reviewer
  RECORD immutable
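
The same sequence can be expressed as an explicit transition table. The following is an illustrative Python sketch, not a reference implementation; its point is that no legal path exists from AI_PROCESSING directly to COMPLETE.

# Illustrative sketch only: escalation states and the transitions permitted between them.
from enum import Enum

class EscalationState(Enum):
    AI_PROCESSING = "AI_PROCESSING"
    AWAITING_REVIEW = "AWAITING_REVIEW"
    DECISION_LOGGED = "DECISION_LOGGED"
    COMPLETE = "COMPLETE"

ALLOWED_TRANSITIONS = {
    EscalationState.AI_PROCESSING: {EscalationState.AWAITING_REVIEW},
    EscalationState.AWAITING_REVIEW: {EscalationState.DECISION_LOGGED},
    EscalationState.DECISION_LOGGED: {EscalationState.COMPLETE},
    EscalationState.COMPLETE: set(),
}

def transition(current, target):
    if target not in ALLOWED_TRANSITIONS[current]:
        raise ValueError(f"Illegal transition: {current.value} -> {target.value}")
    return target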

Non-Bypassable Constraint

Escalation cannot be disabled, bypassed, or overridden by configuration. No administrative privilege permits skipping escalation when trigger conditions are met. This is enforced at the system architecture level.

5. Audit & Compliance Mapping

Framework artifacts map to regulatory requirements. This section provides traceability, not legal interpretation.

| Regulation | Requirement | Framework Artifact | Evidence |
| --- | --- | --- | --- |
| FDA CDS | Human oversight of clinical decisions | Responsibility Matrix | human_reviewer_id, human_decision fields |
| GDPR Art. 22 | Right to human review of automated decisions | Escalation Logic | Mandatory review for high-risk; escalation logs |
| EU AI Act | High-risk system oversight | Role Enforcement | Human role cannot be bypassed; audit trail |
| EU AI Act | Transparency and traceability | Responsibility Matrix | Complete decision chain with timestamps |
| SOC 2 | Processing integrity | Immutable Logging | Append-only records with checksums |
| SOC 2 | Change management | Policy Versioning | Version history with approval records |
| HIPAA | Audit controls | Audit Export | Exportable logs with access controls |
| OCC/Fed | Model risk management | Escalation + Matrix | Uncertainty quantification; human override |

Audit Evidence Availability

6. Language Protocol - Enforcement Layer

Language constraints are deployable governance rules. Mislabeling creates legal exposure.

Prohibited Phrases

| Prohibited Phrase | Risk | Required Alternative |
| --- | --- | --- |
| "The AI decided" | Implies AI authority | "The AI recommended" |
| "System approved" | Implies automated approval | "Reviewer approved" |
| "Algorithm determined" | Implies AI judgment | "Analysis indicated" |
| "Automated decision" | Implies no human | "AI-assisted recommendation" |
| "AI concluded" | Implies AI reasoning | "AI identified" |

Enforcement Points

Language Validation Rule

def validate_language(text: str) -> dict:
    """Reject text that attributes decisions or approval to the AI system."""
    # Phrases are stored lowercase so matching against text.lower() works
    # regardless of capitalization (e.g., "AI decided" and "ai decided" both match).
    prohibited = [
        "ai decided", "system approved", "algorithm determined",
        "automated decision", "ai concluded", "machine judgment",
    ]
    lowered = text.lower()
    for phrase in prohibited:
        if phrase in lowered:
            return {"valid": False, "violation": phrase}
    return {"valid": True}

7. Export & Regulator Packages

Standard Export Bundles

| Package | Contents | Use Case |
| --- | --- | --- |
| Decision Audit | Responsibility matrices for date range; escalation logs; policy versions | Regulatory inquiry; internal audit |
| Escalation Report | All escalations with triggers, resolutions, timing | Compliance review; process audit |
| Policy History | Complete version history with change rationale | Change control audit; timeline reconstruction |
| Role Activity | Actions by role type; reviewer activity; override frequency | Operational review; capacity planning |
| Full Compliance | All above packages combined with integrity proofs | Regulatory examination; legal discovery |
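
As an illustration, the sketch below assembles the Decision Audit bundle from stored matrix records. The package identifier, parameter names, and dictionary layout are assumptions; the specification defines contents, not format. ISO 8601 timestamps in a single timezone compare correctly as strings, which the filter relies on.

# Illustrative sketch only: builds the "Decision Audit" bundle for a date range.
def build_decision_audit_package(records, policy_history, start_iso, end_iso):
    # Filter responsibility matrices by their immutable creation timestamps.
    matrices = [r for r in records if start_iso <= r["created_at"] <= end_iso]
    escalations = [r for r in matrices if r.get("escalation_triggered")]
    return {
        "package": "decision_audit",                        # assumed package identifier
        "date_range": {"start": start_iso, "end": end_iso},
        "responsibility_matrices": matrices,
        "escalation_logs": escalations,
        "policy_versions": policy_history,
    }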

Export Metadata

All exports include:

Export Guarantee

Exports contain sufficient information for external review without requiring system access. A regulator can verify accountability chains using export data alone.
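
A minimal sketch of such an external check follows, assuming the canonical-JSON checksum convention from the earlier sketch; it recomputes each record's checksum and confirms that a named human reviewer and decision are present, using export data alone.

# Illustrative sketch only: offline verification of an exported accountability chain.
import hashlib
import json

def verify_export(records):
    failures = []
    for record in records:
        body = {k: v for k, v in record.items() if k != "checksum"}
        canonical = json.dumps(body, sort_keys=True, separators=(",", ":"))
        if hashlib.sha256(canonical.encode("utf-8")).hexdigest() != record["checksum"]:
            failures.append((record["matrix_id"], "checksum mismatch"))
        if not record.get("human_reviewer_id") or not record.get("human_decision"):
            failures.append((record["matrix_id"], "missing accountability fields"))
    return failures   # an empty list means the chain verifies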

8. Deployment Checklist (Institutional)

All items must be satisfied before production deployment. This is risk gating, not feature enablement.

• Roles Declared: All four roles (AI, Human, Policy, Institution) formally assigned with named owners
• Policies Versioned: Initial policy (version 1.0) created, approved, and logged
• Escalation Tested: All escalation triggers verified with test cases; bypass attempts confirmed blocked
• Audit Logging Verified: Responsibility matrix creation confirmed; immutability tested; export validated
• Language Protocol Enforced: Prohibited phrases blocked in UI, logs, and reports; validation middleware active
• Human Review Path Tested: End-to-end flow from AI output to human decision verified functional
• Export Packages Generated: Test exports created and validated for completeness
• Retention Configured: Data retention period set per regulatory requirements; backup verified
• Legal Review Complete: Institutional counsel has reviewed the deployment configuration
• Accountability Sign-off: Institutional owner has formally accepted liability for the deployment

Gate Requirement

Production deployment is blocked until all checklist items are completed and documented. Incomplete deployments are non-compliant.
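
The gate can be modeled as an all-or-nothing check over the checklist. The sketch below is illustrative only; the item keys are assumptions mirroring the list above.

# Illustrative sketch only: production deployment gate over the checklist.
DEPLOYMENT_CHECKLIST = [
    "roles_declared", "policies_versioned", "escalation_tested",
    "audit_logging_verified", "language_protocol_enforced",
    "human_review_path_tested", "export_packages_generated",
    "retention_configured", "legal_review_complete", "accountability_signoff",
]

def deployment_gate(completed_items):
    missing = [item for item in DEPLOYMENT_CHECKLIST if not completed_items.get(item)]
    if missing:
        raise RuntimeError(f"Deployment blocked; incomplete items: {missing}")
    return True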

9. Change Control & Versioning

Change Categories

| Category | Examples | Review Required | Approval Level |
| --- | --- | --- | --- |
| Frozen | Core invariant; role definitions; escalation non-bypass | Not changeable | N/A - Immutable |
| Major | New escalation triggers; role permission changes | Full review | Institutional + Legal |
| Minor | Threshold adjustments; domain-specific rules | Standard review | Policy owner |
| Administrative | User assignments; documentation updates | Logged only | Authorized admin |
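
A minimal routing sketch follows, assuming Python and lowercase category keys; it maps a proposed change to the approval level in the table above and rejects edits to frozen elements outright.

# Illustrative sketch only: change-category routing.
APPROVAL_LEVELS = {
    "major": "Institutional + Legal",
    "minor": "Policy owner",
    "administrative": "Authorized admin",
}

def route_change(category):
    if category == "frozen":
        raise PermissionError("Frozen elements are immutable and cannot be changed")
    level = APPROVAL_LEVELS.get(category)
    if level is None:
        raise ValueError(f"Unknown change category: {category}")
    return level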

Version Control Requirements

Governance Drift Warning

Governance drift (gradual weakening of controls through incremental changes) increases institutional risk. All changes must be reviewed for cumulative effect on accountability guarantees.