Every organization calling an AI API has accepted residual risk.
Almost none have done it formally.
GABA defines the framework for identifying, documenting, and accepting the risks at the AI inference boundary that no hardening eliminates -- explicitly, attributably, and immutably.
The gap no compliance framework covers.
SOC 2, HIPAA, PCI DSS, FedRAMP, ISO 27001 -- every compliance framework written before 2024 assumed that the systems an organization deploys are under that organization's control. They define controls for systems the organization owns, operates, and can audit.
AI inference is different. When a regulated organization calls an external AI provider, it is sending data to infrastructure it does not own, operated by personnel it cannot audit, under a legal jurisdiction it may not control.
The typical response is to treat this gap as invisible -- call the API, log the call, move on. But invisible acceptance is still acceptance. The risk is present either way; when something goes wrong, the organization cannot demonstrate that it understood and deliberately accepted it.
GABA does not create new risk. It creates a formal record that the risk was identified, understood, and accepted by an authorized human before the deployment went live.
The residual risks that cannot be hardened away.
These risks exist at every external AI inference boundary regardless of how well the boundary is technically hardened. They must be named and accepted explicitly in every GABA attestation.
You can harden your system completely. You can never harden the provider. The AI inference boundary is the only place governance can exist. GABA owns that boundary.
Provider infrastructure
The physical and virtual infrastructure operated by the AI provider is outside the deploying organization's control. Vulnerabilities in provider infrastructure cannot be eliminated by the deploying organization.
Provider key management
Cryptographic keys used by the provider to sign and encrypt inference traffic are managed by the provider. The deploying organization cannot verify or rotate these keys.
Provider personnel
The provider's employees, contractors, and agents have access to provider systems. The deploying organization cannot audit or control provider personnel practices.
Provider government relationships
The provider may be subject to legal compulsion by government authorities -- including national security orders that may prohibit disclosure -- in any jurisdiction where the provider operates. The deploying organization cannot anticipate or prevent such compulsion.
Computer-use agent scope creep
Any AI agent authorized for computer-use within the governed boundary may, through prompt injection or adversarial input, be induced to take actions outside its intended scope. Scope is enforced by policy and monitoring, not by technical constraint.
Inference response integrity
The content of AI inference responses cannot be cryptographically verified as originating from an unmodified model. Response validation reduces but does not eliminate the risk of manipulated outputs.
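Purely as an illustration, the six residual risk classes above could be carried as machine-readable identifiers inside an attestation record. The identifier names in this sketch are assumptions for the example, not part of GABA.

```python
from enum import Enum

class ResidualRiskClass(Enum):
    """Residual risk classes at the external AI inference boundary.

    Each member maps to one of the risk classes described above and must be
    accepted explicitly, per endpoint, in every GABA attestation. The
    identifier strings are illustrative only.
    """
    PROVIDER_INFRASTRUCTURE = "provider-infrastructure"
    PROVIDER_KEY_MANAGEMENT = "provider-key-management"
    PROVIDER_PERSONNEL = "provider-personnel"
    PROVIDER_GOVERNMENT_RELATIONSHIPS = "provider-government-relationships"
    COMPUTER_USE_SCOPE_CREEP = "computer-use-scope-creep"
    INFERENCE_RESPONSE_INTEGRITY = "inference-response-integrity"
```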
Why logs, contracts, and vendor risk assessments are not enough.
Serious buyers will ask: if we already have logs, contracts, and vendor risk assessments, why do we need GABA?
Because none of those prove that a specific human knowingly accepted the specific risks of a specific AI boundary at a specific moment in time -- and cannot later deny it.
Logs prove activity. Contracts govern commercial terms. Vendor risk assessments document due diligence. None of them produce an immutable, attributed, time-bound record of conscious risk acceptance by an authorized signatory. That is what GABA produces. That is what auditors, regulators, and legal counsel will eventually require.
Logging is not governance. Policy is not enforcement. Encryption is not control. GABA is acknowledged lack of control, formally accepted -- and that is why it is the only artifact that satisfies the question auditors will eventually ask.
The AI Boundary Risk Acceptance Record.
The core artifact of GABA is the AI Boundary Risk Acceptance Record (ABRAR) -- one per governed deployment, covering all authorized endpoints. It must be signed by an authorized human signatory and anchored to a Chandra Protocol context unit. The Chandra CU makes it unforgeable.
Organization
Full legal name of the deploying organization.
Deployment ID
Unique identifier for this governed deployment instance.
Assessment date
Date this record was created. ISO 8601 format.
Authorized signatory
Full name and title of the human accepting residual risk on behalf of the organization.
Endpoint registry
Complete list of all authorized external AI inference endpoints. Each entry: provider name, endpoint URL, certificate fingerprint, purpose.
Controls confirmed
Attestation that all applicable boundary hardening controls are active: certificate pinning, request signing, response validation, anomaly detection, circuit breaker, payload sanitization.
Computer-use status
Explicit statement of whether any AI agent with computer-use capability is authorized. If yes: each agent is individually named, scoped, and attested separately.
Residual risks accepted
Explicit enumeration of all residual risk classes accepted for each endpoint. Each risk named, not implied.
Chandra CU reference
Mandatory. The context unit ID of the Chandra Protocol record that makes this attestation immutable. An ABRAR without a Chandra CU reference is a document, not an attestation.
Review date
Date by which this attestation must be reviewed and re-signed. Recommended: 12 months or upon any change to the endpoint registry.
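Purely as a sketch, the fields above might be represented as a data structure like the one below. The Python names are assumptions for the example; GABA specifies the fields, not a serialization format.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Endpoint:
    """One entry in the ABRAR endpoint registry."""
    provider_name: str
    endpoint_url: str
    certificate_fingerprint: str
    purpose: str

@dataclass
class ABRAR:
    """AI Boundary Risk Acceptance Record -- one per governed deployment."""
    organization: str                    # full legal name of the deploying organization
    deployment_id: str                   # unique identifier for this deployment instance
    assessment_date: date                # serialized as ISO 8601
    authorized_signatory: str            # full name and title of the accepting human
    endpoint_registry: list[Endpoint]    # all authorized external AI inference endpoints
    controls_confirmed: dict[str, bool]  # e.g. {"certificate_pinning": True, ...}
    computer_use_authorized: bool        # if True, each agent is named and attested separately
    residual_risks_accepted: dict[str, list[str]]  # endpoint URL -> risk-class identifiers
    chandra_cu_reference: str            # mandatory Chandra Protocol context unit ID
    review_date: date                    # must be reviewed and re-signed by this date
```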
AI agent types require different attestation.
Not all AI endpoints present the same risk profile. GABA distinguishes four agent types with distinct attestation requirements.
Inference-only
Model receives text or structured input, returns text or structured output. No system access. One ABRAR per endpoint.
Tool-use
Model can invoke defined tools (search, database query, API calls) within explicit scope. Each tool explicitly scoped and logged.
Computer-use
Agent can operate keyboard, mouse, browser, filesystem, or any graphical interface. Scope formally bounded. All actions logged as Chandra CUs. Prompt injection risk formally documented. An AI coding agent on an unmanaged workstation reaching the governed boundary invalidates the CRC Governed Server Boundary prerequisite.
Autonomous
Agent retains state across sessions and can initiate actions without per-action human authorization. Mandatory Chandra CU on every autonomous action. Human review cadence formally specified.
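As a sketch only, the four agent types and the attestation requirements listed above can be expressed as a lookup structure. The enum values mirror the type labels above; the requirement strings simply restate the text.

```python
from enum import Enum

class AgentType(Enum):
    INFERENCE_ONLY = "inference-only"
    TOOL_USE = "tool-use"
    COMPUTER_USE = "computer-use"
    AUTONOMOUS = "autonomous"

# Per-type attestation requirements, restating the descriptions above.
ATTESTATION_REQUIREMENTS = {
    AgentType.INFERENCE_ONLY: ["one ABRAR per endpoint"],
    AgentType.TOOL_USE: ["each tool explicitly scoped and logged"],
    AgentType.COMPUTER_USE: [
        "scope formally bounded",
        "all actions logged as Chandra CUs",
        "prompt injection risk formally documented",
    ],
    AgentType.AUTONOMOUS: [
        "mandatory Chandra CU on every autonomous action",
        "human review cadence formally specified",
    ],
}
```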
GABA and the CRC Boundary pillar.
GABA is the attestation mechanism for the CRC Boundary pillar. Completing GABA attestation for all authorized endpoints achieves a Boundary pillar score of 4 -- the maximum under the CRC Minimum Surface Standard. Combined with maximum scores on the other three pillars, a Boundary score of 4 brings the total MSS to 16: full CRC compliance.
Hardening controls without GABA attestation achieve Boundary score 3 at most. The formal, Chandra-anchored, signatory-attributed record is what makes score 4.
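For concreteness, a minimal sketch of the MSS arithmetic, assuming each of the four CRC pillars is scored out of 4. Only the Boundary pillar is named in this document, so the other pillar names below are placeholders.

```python
def minimum_surface_standard(pillar_scores: dict[str, int]) -> int:
    """Sum the four CRC pillar scores; a total of 16 is full CRC compliance."""
    assert len(pillar_scores) == 4, "CRC MSS is scored across four pillars"
    assert all(score <= 4 for score in pillar_scores.values()), "4 is the maximum per pillar"
    return sum(pillar_scores.values())

# Boundary reaches 4 only with complete GABA attestation; hardening controls
# alone cap it at 3. The other three pillar names are placeholders here.
assert minimum_surface_standard(
    {"boundary": 4, "pillar_two": 4, "pillar_three": 4, "pillar_four": 4}
) == 16  # full CRC compliance
```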
The gap GABA fills has existed since the first regulated organization called an external AI API. Every organization that has done so without a formal residual risk acceptance record has accepted that risk implicitly. GABA makes it explicit.
A governed computing model for the AI era.
GABA does not stand alone. It is the final layer of a complete governed computing architecture:
CRC defines the shape of a safe system -- what must be true about your infrastructure before AI deployment is defensible.
Aegis Genera enforces the runtime boundary -- the governed execution substrate where the system actually runs, making GABA enforceable rather than theoretical.
Chandra Protocol makes every claim irreversible -- turning "we said we reviewed it" into "this exact human accepted these exact risks at this exact time, and cannot deny it."
GABA closes the loop -- the human acceptance of what cannot be controlled, formally declared at the only place governance can exist: the AI inference boundary.
Nothing important can happen without being constrained by CRC, executed in Aegis Genera, recorded by Chandra, and consciously accepted by a named human through GABA. That is a governed computing model for the AI era.
General Reasoning, Inc. · Birmingham, Alabama · MIT License · 2026
Enterprise inquiries: inquiries@genreason.com