Comprehensive Reference

AI-Balance™ Glossary

Complete definitions of all core concepts, metrics, and principles
in the AI-Balance governance framework

A

An Lạc

安樂 (Peace & Well-being)

State of balanced Human-AI harmony achieved through sustainable equilibrium. An Lạc represents the philosophical goal where technology serves human flourishing without replacement or domination. Derived from Vietnamese/Chinese characters meaning "peace" (安) and "joy/well-being" (樂).

Example:
A medical AI system operating at HAI=0.80 achieves An Lạc by maintaining physician authority while providing valuable diagnostic assistance.

APR

AI Participation Ratio

Complementary metric measuring AI's participation level in decision-making. APR is mathematically constrained by the DAHEM conservation law to equal 1 minus HAI. Higher APR values indicate greater AI involvement, triggering Sentinel Shield when domain thresholds are exceeded.

Formula:
APR = 1 - HAI
Example:
If HAI = 0.65 (human retains 65% authority), then APR = 0.35 (AI participates at 35% level). Maximum APR in medical domain is 0.25.
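Sketch (Python):
A minimal sketch of the APR derivation and a domain-maximum check. The function names are illustrative; only the medical ceiling of 0.25 comes from the example above.

def apr_from_hai(hai: float) -> float:
    """Derive APR from HAI under the DAHEM conservation law."""
    return 1.0 - hai

def apr_exceeds_domain_max(apr: float, domain_max: float) -> bool:
    """True when AI participation surpasses the domain ceiling."""
    return apr > domain_max

apr = apr_from_hai(0.65)                  # 0.35
print(apr_exceeds_domain_max(apr, 0.25))  # True: exceeds the medical maximum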

C

Cognitive Sovereignty

Human right to maintain meaningful authority over AI-assisted decisions. Cognitive Sovereignty ensures humans retain control over their decision-making processes even when leveraging AI capabilities. This principle forms the ethical foundation of AI-Balance, aligning with EU AI Act and GDPR requirements.

Example:
A patient receiving AI-generated treatment recommendations maintains cognitive sovereignty when the system clearly discloses HAI/APR metrics and allows override at any time.

D

DAHEM

Dynamic AI-Human Equilibrium Model

Core conservation law ensuring transparent authority distribution between humans and AI systems. DAHEM establishes that at any moment, the sum of Human Authority Index (HAI) and AI Participation Ratio (APR) must equal unity within epsilon tolerance (0.001). This mathematical constraint prevents opaque AI decision-making.

Conservation Law:
HAI + APR = 1.0 ± 0.001
Example:
System calculates HAI=0.628, automatically derives APR=0.372. Total = 1.000 (within ε tolerance). If HAI increases to 0.700, APR decreases to 0.300 to maintain conservation.
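Sketch (Python):
A minimal sketch of the conservation check; the function name is illustrative.

EPSILON = 0.001  # tolerance from the conservation law

def dahem_holds(hai: float, apr: float, eps: float = EPSILON) -> bool:
    """Verify that HAI + APR equals 1.0 within epsilon tolerance."""
    return abs((hai + apr) - 1.0) <= eps

print(dahem_holds(0.628, 0.372))  # True: sums to 1.000
print(dahem_holds(0.700, 0.295))  # False: sum of 0.995 violates conservation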

DI

Deviation Index

Metric measuring drift from intended Human-AI authority balance. DI quantifies the absolute difference between actual and intended HAI values, serving as an early warning system for authority violations. Critical DI levels trigger Sentinel Shield activation.

Formula:
DI = |HAI_actual - HAI_intended|
Thresholds:
Healthy: DI < 0.10 (less than 10% deviation)
Warning: 0.10 ≤ DI < 0.20 (approaching critical)
Critical: DI ≥ 0.20 (Sentinel Shield activates)
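Sketch (Python):
A minimal sketch computing DI and classifying it against the thresholds above; the function names are illustrative.

def deviation_index(hai_actual: float, hai_intended: float) -> float:
    """DI = |HAI_actual - HAI_intended|."""
    return abs(hai_actual - hai_intended)

def classify_di(di: float) -> str:
    if di < 0.10:
        return "Healthy"
    if di < 0.20:
        return "Warning"
    return "Critical"  # Sentinel Shield activates

di = deviation_index(hai_actual=0.62, hai_intended=0.75)
print(round(di, 2), classify_di(di))  # 0.13 Warning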

F

f(ε) Function

Epsilon Function / Drift Prediction Model

Mathematical model predicting epistemic drift in AI systems over time. The f(ε) function uses six validation parameters to forecast DI values, enabling proactive governance rather than reactive intervention. Epsilon (ε = 0.001) represents the strict tolerance for the DAHEM conservation law.

Purpose:
Predict DI_future based on current metrics
Application:
Production deployment: f(ε) predicts whether the system will exceed DI thresholds in the next N interactions, allowing preemptive adjustment before Sentinel Shield activation is required.
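Sketch (Python):
Purely illustrative: the six validation parameters of f(ε) are not specified in this glossary, so the sketch below assumes a simple linear drift model in their place.

def predict_di(di_history: list, horizon: int) -> float:
    """Extrapolate DI 'horizon' interactions ahead from recent drift."""
    if len(di_history) < 2:
        return di_history[-1]
    per_step = (di_history[-1] - di_history[0]) / (len(di_history) - 1)
    return di_history[-1] + per_step * horizon

history = [0.05, 0.07, 0.09, 0.11]               # DI over recent interactions
print(round(predict_di(history, horizon=5), 2))  # 0.21: crosses the 0.20 critical line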

H

HAI

Human Authority Index

Primary metric measuring the degree to which a human maintains decision-making authority in an AI-assisted interaction. HAI ranges from 0.0 (theoretical full AI autonomy) to 1.0 (pure human decision with no AI involvement). Domain-specific minimum thresholds ensure appropriate human oversight.

Formula:
HAI = 1 - APR
Example:
Medical domain requires HAI ≥ 0.75 (human retains minimum 75% authority). If system calculates HAI = 0.80, human maintains strong authority while benefiting from 20% AI assistance.
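Sketch (Python):
A minimal sketch of a domain-minimum check. Only the medical floor of 0.75 appears in this glossary; the other table entries are hypothetical placeholders.

HAI_DOMAIN_MINIMUMS = {
    "medical": 0.75,   # from the example above
    "finance": 0.60,   # hypothetical placeholder
    "creative": 0.30,  # hypothetical placeholder
}

def hai_meets_domain_minimum(hai: float, domain: str) -> bool:
    """True when the human retains at least the domain's required authority."""
    return hai >= HAI_DOMAIN_MINIMUMS[domain]

print(hai_meets_domain_minimum(0.80, "medical"))  # True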

K

K# Protocol

Ri-Lingua™ | Vietnamese AI Encoding System

Deterministic Vietnamese ↔ Latin encoding system preserving tonal semantics in AI processing. K# achieves a 53% token reduction compared to standard Vietnamese while maintaining 100% lossless reversibility. Built on the KHD_CAP™ Matrix (18 tone-character mappings), it enables Zero-Shimmer communication.

Example Encoding:
Vietnamese: "Xin chào, tôi là Ba Phúc"
K# Encoding: "SinK chaDoK, toiK laD BaK FucH"
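Sketch (Python):
This illustrates only the deterministic, lossless round-trip property. The mappings below are invented placeholders, not the actual KHD_CAP™ Matrix, whose 18 tone-character pairs are not reproduced in this glossary.

TONE_MAP = {"à": "aD", "ú": "uH", "ô": "oK"}  # hypothetical entries
REVERSE_MAP = {latin: viet for viet, latin in TONE_MAP.items()}

def encode(text: str) -> str:
    for viet, latin in TONE_MAP.items():
        text = text.replace(viet, latin)
    return text

def decode(text: str) -> str:
    # Naive replace is only a toy; the real K# matrix is designed so that
    # codes cannot collide with ordinary text.
    for latin, viet in REVERSE_MAP.items():
        text = text.replace(latin, viet)
    return text

sample = "chào"
assert decode(encode(sample)) == sample  # lossless round trip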

N

Non-Delegability Principle

Core RL-Law principle establishing that humans cannot fully transfer decision-making authority to AI systems. Even in high-automation scenarios, HAI never reaches 0.0: a minimum of human authority must be preserved. This ensures accountability and prevents "algorithmic abdication" of responsibility.

Application:
Even fully automated stock trading systems must maintain HAI > 0.0 (typically ≥ 0.05), allowing human circuit-breaker intervention. A user cannot delegate 100% of authority, regardless of preference.

R

RCL

Resontologic™ Control Layer

Middleware governance system implementing AI-Balance standards across any Large Language Model. RCL provides real-time monitoring, metric calculation, Sentinel Shield integration, and audit trail logging. Functions as an API wrapper enabling governance-by-design.

Implementation:
RCL wraps GPT/Claude/Gemini API calls, calculating HAI/APR/DI before returning responses. It logs all metrics with timestamps for compliance audits and activates Sentinel Shield when thresholds are exceeded.
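Sketch (Python):
A hypothetical wrapper outline, not the actual RCL API. 'call_llm' stands in for any GPT/Claude/Gemini client call, and the APR estimation is assumed to happen inside it.

import time

def rcl_wrap(call_llm, prompt: str, domain_max_apr: float):
    """Wrap an LLM call with metric logging and a Sentinel Shield check."""
    response, apr = call_llm(prompt)  # model output plus estimated APR
    hai = 1.0 - apr                   # DAHEM conservation
    audit_entry = {"ts": time.time(), "hai": hai, "apr": apr}
    if apr > domain_max_apr:
        return audit_entry, "SENTINEL SHIELD: threshold exceeded, awaiting override"
    return audit_entry, response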

Ri-Equi

Resontologic Equilibrium

Measurable equilibrium state where humans and AI systems preserve authentic nature while collaborating effectively. Ri-Equi is achieved when DI < 0.10 and HAI meets domain minimum thresholds. Represents the practical manifestation of An Lạc philosophy.

Visual Metaphor:
Balance scale (⚖️) with human authority on the left and AI participation on the right. Ri-Equi is achieved when the scale balances at a domain-appropriate ratio, not necessarily 50/50.
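Sketch (Python):
A minimal sketch of the equilibrium condition; the function name is illustrative.

def is_ri_equi(di: float, hai: float, domain_min_hai: float) -> bool:
    """Equilibrium: DI below 0.10 and HAI at or above the domain minimum."""
    return di < 0.10 and hai >= domain_min_hai

print(is_ri_equi(di=0.04, hai=0.80, domain_min_hai=0.75))  # True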

RL-Law

Resontologic Law

Constitutional framework of 13 core principles governing AI behavior and Human-AI interaction. RL-Law establishes boundaries ensuring AI systems operate transparently, preserve human authority, and respect cognitive sovereignty. Provides philosophical foundation for AI-Balance metrics.

Key Principles:
Non-Delegability: Humans cannot fully delegate authority to AI
Transparency: AI must declare participation level
Reversibility: Humans can override AI at any time

S

SAP

Subject-Action-Parameter

Structured parsing format enabling deterministic validation of AI interactions. SAP breaks down requests into Subject (who/what acts), Action (what is done), and Parameter (context/constraints), allowing Trinity Filter to verify syntactic correctness and authority compliance.

Example Parse:
User request: "Recommend treatment for pneumonia"
SAP: Subject="User", Action="RequestAdvice", Parameter="MedicalDomain"
Triggers: Medical threshold check (HAI ≥ 0.75 required)
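Sketch (Python):
A minimal sketch of an SAP record mirroring the parse above; the class and field names are illustrative.

from dataclasses import dataclass

@dataclass
class SAP:
    subject: str    # who/what acts
    action: str     # what is done
    parameter: str  # context/constraints

request = SAP(subject="User", action="RequestAdvice", parameter="MedicalDomain")
medical_check_needed = request.parameter == "MedicalDomain"  # HAI >= 0.75 applies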

Sentinel Shield

Real-time override mechanism preventing Human-AI authority violations. Sentinel Shield activates when: (1) DI exceeds the critical threshold (≥ 0.20), (2) APR surpasses the domain maximum, (3) Trinity Filter detects an unsafe decision, or (4) a human operator manually triggers it. Activation pauses the AI response and requires explicit override approval.

Activation Example:
Medical AI suggests a treatment while APR climbs to 0.30. The domain maximum is 0.25, so Sentinel Shield immediately pauses and alerts the physician: "Authority threshold exceeded. Confirm to proceed or modify request."
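Sketch (Python):
A minimal sketch combining the four activation triggers; the names are illustrative.

def sentinel_shield_activates(di: float, apr: float, domain_max_apr: float,
                              trinity_unsafe: bool, manual_trigger: bool) -> bool:
    return (di >= 0.20                # (1) critical deviation
            or apr > domain_max_apr   # (2) domain maximum surpassed
            or trinity_unsafe         # (3) Trinity Filter flagged the decision
            or manual_trigger)        # (4) human operator override

# Mirrors the medical example: APR at 0.30 against a 0.25 maximum.
print(sentinel_shield_activates(0.08, 0.30, 0.25, False, False))  # True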

T

Trinity Filter

Three-layer validation system preventing AI hallucinations and authority violations. Trinity Filter processes every AI response through: (1) Syntactic Layer (SAP parsing), (2) Semantic Layer (logical coherence), and (3) Authority Layer (HAI/APR compliance). Only responses passing all three layers proceed to user.

Validation Flow:
Layer 1: Parse SAP structure → Valid
Layer 2: Check semantic coherence → No contradictions detected
Layer 3: Verify HAI=0.78 ≥ 0.75 (medical threshold) → Approved
Result: Response delivered to user
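Sketch (Python):
A minimal sketch of the three-layer flow; the boolean inputs stand in for the real syntactic and semantic validators.

def trinity_filter(response: str, sap_valid: bool, coherent: bool,
                   hai: float, domain_min_hai: float) -> str:
    if not sap_valid:             # Layer 1: syntactic (SAP parsing)
        return "REJECTED: malformed SAP structure"
    if not coherent:              # Layer 2: semantic coherence
        return "REJECTED: contradiction detected"
    if hai < domain_min_hai:      # Layer 3: authority compliance
        return "REJECTED: HAI below domain threshold"
    return response               # all three layers passed

print(trinity_filter("response text", True, True, hai=0.78, domain_min_hai=0.75))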

Z

Zero-Shimmer Principle

Elimination of ambiguity in Human-AI authority distribution. "Shimmer" represents uncertainty or opacity about who holds decision-making power. Zero-Shimmer is achieved through transparent HAI/APR disclosure, deterministic DAHEM calculations, and SAP-based structured communication. Users know the authority distribution at every moment.

Philosophy:
Traditional AI: "The AI recommended X" (shimmer: unclear who decided)
Zero-Shimmer AI: "AI suggested X (APR=0.35). You retain 65% authority (HAI=0.65). Approve, modify, or reject."
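Sketch (Python):
A minimal sketch generating the disclosure line above; the exact wording is illustrative.

def zero_shimmer_disclosure(suggestion: str, apr: float) -> str:
    hai = 1.0 - apr
    return (f"AI suggested {suggestion} (APR={apr:.2f}). "
            f"You retain {hai:.0%} authority (HAI={hai:.2f}). "
            f"Approve, modify, or reject.")

print(zero_shimmer_disclosure("X", 0.35))
# AI suggested X (APR=0.35). You retain 65% authority (HAI=0.65). Approve, modify, or reject.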
