A
An Lạc ™
State of sustainable Human-AI equilibrium in which technology serves human flourishing without replacement or domination. An Lạc represents the framework's philosophical goal. The name derives from the Sino-Vietnamese reading of the Chinese characters 安 ("peace") and 樂 ("joy/well-being").
APR ™
AI Participation Ratio: complementary metric measuring the AI's level of participation in decision-making. APR is mathematically constrained by the DAHEM conservation law to equal 1 minus HAI. Higher APR values indicate greater AI involvement, triggering Sentinel Shield when domain thresholds are exceeded.
C
Cognitive Sovereignty ™
Human right to maintain meaningful authority over AI-assisted decisions. Cognitive Sovereignty ensures humans retain control over their decision-making processes even when leveraging AI capabilities. This principle forms the ethical foundation of AI-Balance, aligning with EU AI Act and GDPR requirements.
D
DAHEM ™
Core conservation law ensuring transparent authority distribution between humans and AI systems. DAHEM establishes that at any moment, the sum of Human Authority Index (HAI) and AI Participation Ratio (APR) must equal unity within epsilon tolerance (0.001). This mathematical constraint prevents opaque AI decision-making.
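The law reduces to a single invariant that an implementation can verify after every interaction. A minimal sketch in Python, assuming a hypothetical dahem_holds helper; only the HAI + APR = 1 relation and the ε = 0.001 tolerance come from the definition above:

```python
EPSILON = 0.001  # strict DAHEM tolerance stated above

def dahem_holds(hai: float, apr: float, epsilon: float = EPSILON) -> bool:
    """Return True if the HAI/APR pair satisfies the DAHEM conservation law,
    i.e. HAI + APR equals unity within the epsilon tolerance."""
    return abs((hai + apr) - 1.0) <= epsilon

# Example: an interaction reported as 65% human / 35% AI authority
assert dahem_holds(hai=0.65, apr=0.35)       # conserved
assert not dahem_holds(hai=0.65, apr=0.40)   # violates the law by 0.05 > ε
```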
DI ™
Metric measuring drift from intended Human-AI authority balance. DI quantifies the absolute difference between actual and intended HAI values, serving as an early warning system for authority violations. Critical DI levels trigger Sentinel Shield activation.
Warning: 0.10 ≤ DI < 0.20 (approaching critical)
Critical: DI ≥ 0.20 (Sentinel Shield activates)
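A minimal sketch of how these bands might be applied, assuming hypothetical helper names (drift_index, classify_di). The drift formula (absolute difference between actual and intended HAI) and the 0.10/0.20 cutoffs come from the entry above; the "stable" label for DI < 0.10 is inferred from the Ri-Equi entry later in this glossary:

```python
def drift_index(actual_hai: float, intended_hai: float) -> float:
    """DI: absolute difference between actual and intended HAI."""
    return abs(actual_hai - intended_hai)

def classify_di(di: float) -> str:
    """Map a DI value to the bands listed above (labels are illustrative)."""
    if di >= 0.20:
        return "critical"   # Sentinel Shield activates
    if di >= 0.10:
        return "warning"    # approaching critical
    return "stable"         # within the Ri-Equi range (DI < 0.10)

print(classify_di(drift_index(actual_hai=0.62, intended_hai=0.80)))  # -> warning
```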
F
f(ε) Function
Mathematical model predicting epistemic drift in AI systems over time. The f(ε) function uses six validation parameters to forecast DI values, enabling proactive governance rather than reactive intervention. Epsilon (ε = 0.001) is the strict tolerance of the DAHEM conservation law.
H
HAI ™
Human Authority Index: primary metric measuring the degree to which a human maintains decision-making authority in an AI-assisted interaction. HAI ranges from 0.0 (theoretical full AI autonomy) to 1.0 (pure human decision with no AI involvement). Domain-specific minimum thresholds ensure appropriate human oversight.
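A minimal sketch of a domain-threshold check, assuming a hypothetical check_hai_floor helper. Only the 0.0-1.0 range and the medical minimum of 0.75 (cited in the SAP and Trinity Filter entries) come from this glossary; any other domain values would be deployment configuration:

```python
# Only the "medical" floor (0.75) appears in this glossary; the dictionary
# itself is an illustrative configuration, not part of the standard.
HAI_MINIMUMS = {"medical": 0.75}

def check_hai_floor(hai: float, domain: str) -> bool:
    """Return True if the human retains at least the domain's minimum authority."""
    if not 0.0 <= hai <= 1.0:
        raise ValueError("HAI must lie in [0.0, 1.0]")
    return hai >= HAI_MINIMUMS.get(domain, 0.0)

print(check_hai_floor(0.78, "medical"))  # True: 0.78 >= 0.75
print(check_hai_floor(0.70, "medical"))  # False: below the medical floor
```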
K
K# Protocol ™
Deterministic Vietnamese ↔ Latin encoding system preserving tonal semantics in AI processing. K# achieves 53% token reduction compared to standard Vietnamese while maintaining 100% lossless reversal. Built on KHD_CAP™ Matrix (18 tone-character mappings), enabling Zero-Shimmer communication.
K# Encoding: "SinK chaDoK, toiK laD BaK FucH"
R
RCL ™
Middleware governance system implementing AI-Balance standards across any Large Language Model. RCL provides real-time monitoring, metric calculation, Sentinel Shield integration, and audit trail logging. Functions as an API wrapper enabling governance-by-design.
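The glossary does not specify the RCL API, so the following is only a minimal sketch of the wrapper pattern it describes, with hypothetical names (GovernedLLM, generate, llm_call) and a bare-bones audit trail:

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class GovernedLLM:
    """Illustrative middleware wrapper: call an LLM, attach metrics, log an audit record."""
    llm_call: Callable[[str], str]          # any underlying model call
    audit_trail: List[dict] = field(default_factory=list)

    def generate(self, prompt: str, hai: float) -> str:
        apr = 1.0 - hai                      # DAHEM conservation law
        response = self.llm_call(prompt)
        self.audit_trail.append({"prompt": prompt, "hai": hai, "apr": apr})
        return response

# Usage with a stand-in model:
rcl = GovernedLLM(llm_call=lambda p: f"(model output for: {p})")
print(rcl.generate("Summarise the contract", hai=0.7))
```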
Ri-Equi ™
Measurable equilibrium state where humans and AI systems preserve authentic nature while collaborating effectively. Ri-Equi is achieved when DI < 0.10 and HAI meets domain minimum thresholds. Represents the practical manifestation of An Lạc philosophy.
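The two conditions translate directly into a boolean check; a minimal sketch with a hypothetical in_ri_equi helper:

```python
def in_ri_equi(di: float, hai: float, domain_min_hai: float) -> bool:
    """Ri-Equi as defined above: drift below 0.10 and HAI at or above the domain minimum."""
    return di < 0.10 and hai >= domain_min_hai

print(in_ri_equi(di=0.04, hai=0.78, domain_min_hai=0.75))  # True: equilibrium
print(in_ri_equi(di=0.12, hai=0.78, domain_min_hai=0.75))  # False: drift too high
```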
RL-Law ™
Constitutional framework of 13 core principles governing AI behavior and Human-AI interaction. RL-Law establishes boundaries ensuring AI systems operate transparently, preserve human authority, and respect cognitive sovereignty. Provides philosophical foundation for AI-Balance metrics.
Transparency: AI must declare participation level
Reversibility: Humans can override AI at any time
S
SAP ™
Structured parsing format enabling deterministic validation of AI interactions. SAP breaks down requests into Subject (who/what acts), Action (what is done), and Parameter (context/constraints), allowing Trinity Filter to verify syntactic correctness and authority compliance.
SAP: Subject="User", Action="RequestAdvice", Parameter="MedicalDomain"
Triggers: Medical threshold check (HAI ≥ 0.75 required)
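A minimal sketch of the SAP triple as a data structure, assuming hypothetical class and method names; only the Subject/Action/Parameter breakdown and the medical example above come from the glossary:

```python
from dataclasses import dataclass

@dataclass
class SAP:
    """Structured request: who acts, what is done, and under which context."""
    subject: str     # who/what acts
    action: str      # what is done
    parameter: str   # context/constraints

    def is_well_formed(self) -> bool:
        """Syntactic check of the kind used by the Trinity Filter's first layer: no empty fields."""
        return all(bool(v.strip()) for v in (self.subject, self.action, self.parameter))

request = SAP(subject="User", action="RequestAdvice", parameter="MedicalDomain")
print(request.is_well_formed())  # True -> proceeds to semantic and authority checks
```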
Sentinel Shield ™
Real-time override mechanism preventing Human-AI authority violations. Sentinel Shield activates when: (1) DI exceeds the critical threshold (≥ 0.20), (2) APR surpasses the domain maximum, (3) the Trinity Filter flags an unsafe decision, or (4) a human operator triggers it manually. Activation pauses the AI response and requires explicit override approval.
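A minimal sketch of the four activation conditions as one predicate, with hypothetical parameter names; the 0.20 cutoff is the critical DI threshold from the DI entry:

```python
def sentinel_shield_activates(di: float,
                              apr: float,
                              domain_apr_max: float,
                              trinity_flagged_unsafe: bool,
                              manual_trigger: bool) -> bool:
    """True if any of the four activation conditions listed above is met."""
    return (di >= 0.20                      # (1) critical drift
            or apr > domain_apr_max         # (2) AI participation above the domain maximum
            or trinity_flagged_unsafe       # (3) Trinity Filter flagged an unsafe decision
            or manual_trigger)              # (4) human operator override

print(sentinel_shield_activates(di=0.05, apr=0.40, domain_apr_max=0.25,
                                trinity_flagged_unsafe=False, manual_trigger=False))  # True
```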
T
Trinity Filter ™
Three-layer validation system preventing AI hallucinations and authority violations. Trinity Filter processes every AI response through: (1) Syntactic Layer (SAP parsing), (2) Semantic Layer (logical coherence), and (3) Authority Layer (HAI/APR compliance). Only responses passing all three layers proceed to user.
Layer 1: Parse SAP structure → Valid
Layer 2: Check semantic coherence → No contradictions detected
Layer 3: Verify HAI=0.78 ≥ 0.75 (medical threshold) → Approved
Result: Response delivered to user
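A minimal sketch of the three-layer pipeline, assuming hypothetical function names and deliberately simplified layer logic; only the layer order and the 0.75 medical threshold used in the trace above come from the glossary:

```python
def trinity_filter(sap_fields: dict, response: str, hai: float, domain_min_hai: float) -> bool:
    """Run the three layers in order; a response is delivered only if all of them pass."""
    # Layer 1 (syntactic): the SAP triple must be complete.
    if not all(sap_fields.get(k) for k in ("subject", "action", "parameter")):
        return False
    # Layer 2 (semantic): placeholder coherence check (real logic is model-specific).
    if not response.strip():
        return False
    # Layer 3 (authority): HAI must meet the domain minimum.
    return hai >= domain_min_hai

approved = trinity_filter({"subject": "User", "action": "RequestAdvice", "parameter": "MedicalDomain"},
                          "Suggested next step: consult a physician.", hai=0.78, domain_min_hai=0.75)
print(approved)  # True -> response delivered to user
```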
Z
Zero-Shimmer Principle ™
Elimination of ambiguity in Human-AI authority distribution. "Shimmer" denotes uncertainty or opacity about who holds decision-making power. Zero-Shimmer is achieved through transparent HAI/APR disclosure, deterministic DAHEM calculations, and SAP-based structured communication, so users always know the current authority distribution.
Zero-Shimmer AI: "AI suggested X (APR=0.35). You retain 65% authority (HAI=0.65). Approve, modify, or reject."
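A minimal sketch of a disclosure formatter that reproduces the style of the example line above, assuming a hypothetical disclosure helper (APR follows from the DAHEM law):

```python
def disclosure(suggestion: str, hai: float) -> str:
    """Build a Zero-Shimmer authority disclosure from HAI; APR = 1 - HAI by DAHEM."""
    apr = 1.0 - hai
    return (f"AI suggested {suggestion} (APR={apr:.2f}). "
            f"You retain {hai:.0%} authority (HAI={hai:.2f}). "
            f"Approve, modify, or reject.")

print(disclosure("X", hai=0.65))
# AI suggested X (APR=0.35). You retain 65% authority (HAI=0.65). Approve, modify, or reject.
```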
Non-Delegability Principle
Core RL-Law principle establishing that humans cannot fully transfer decision-making authority to AI systems. Even in high-automation scenarios, HAI never reaches 0.0; a minimum of human authority is always preserved. This ensures accountability and prevents "algorithmic abdication" of responsibility.
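A minimal sketch of how an implementation might enforce the floor, assuming a hypothetical enforce_non_delegability helper and an illustrative floor value; the glossary only states that HAI never reaches 0.0, not a specific minimum:

```python
def enforce_non_delegability(requested_hai: float, hai_floor: float = 0.05) -> float:
    """Clamp a requested HAI to a non-zero floor so authority is never fully delegated.
    The 0.05 default is illustrative; the principle only requires HAI > 0.0."""
    if hai_floor <= 0.0:
        raise ValueError("The floor must be strictly positive under Non-Delegability")
    return max(requested_hai, hai_floor)

print(enforce_non_delegability(0.0))   # 0.05: full delegation is rejected
print(enforce_non_delegability(0.6))   # 0.6: unchanged
```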