Verifiable LLM Inference
Why verifiable inference?
As we rely more on agents, we must ask: Do I trust this agent?
Verifiable inference allows us to establish trust in systems of interacting agents.
What is verifiable inference?
Consider an agent powered by API queries to an LLM:

With verifiable inference, the agent returns a signed response to the user, attesting to the:
Query (the prompt that was sent and the response that was returned)
Model (which model was queried, e.g. DeepSeek R1)
Context Data (what data this inference used)
Verifiable inference allows us to build trustworthy agents that provide users with a verifiable guarantee of the agent's LLM usage.
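To make the attested fields concrete, here is a minimal sketch of what a signed response could cover. The field names and canonicalization are illustrative assumptions, not AgentNet's actual wire format; the point is that the query, model, and context data are bound into one deterministic payload before signing.

```python
import hashlib
import json

def attestation_payload(prompt, response, model, context_refs):
    """Build the canonical bytes a TEE would sign: query, model, and
    context data. Field names here are illustrative, not AgentNet's
    actual format."""
    record = {
        "query": {"prompt": prompt, "response": response},
        "model": model,
        "context": context_refs,
    }
    # Canonical JSON (sorted keys, no extra whitespace) so that the
    # signer and the verifier hash byte-identical payloads.
    return json.dumps(record, sort_keys=True, separators=(",", ":")).encode()

payload = attestation_payload(
    "What is 2+2?", "4", "DeepSeek R1", ["doc://context-example"])
digest = hashlib.sha256(payload).hexdigest()
```

Because the serialization is canonical, any party holding the same four fields can recompute the exact digest that was signed.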
AgentNet's Verifiable Inference
AgentNet uses Trusted Execution Environments (TEEs) to perform API calls and produce signatures of all LLM interactions. TEEs are tamper-resistant execution environments that produce cryptographic "attestations" of the image they are running.
The agent runs in a TEE image that:
Generates a key pair within the TEE and exports the public key
Offers no way to export the signing key from the TEE
Performs LLM inference API calls and signs the responses
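The three behaviors above can be sketched as a single class. This is illustrative, not AgentNet's actual code: HMAC-SHA256 stands in for an asymmetric scheme such as Ed25519 (which Python's standard library lacks), and the LLM call is stubbed out. With a real asymmetric key pair, only the public half would ever leave the TEE.

```python
import hashlib
import hmac
import json
import os

class TeeAgent:
    """Sketch of the TEE image's behavior. HMAC-SHA256 is a stand-in
    for an asymmetric signature scheme (e.g. Ed25519); in a real TEE
    the signing key has no export path and only the public key is
    published."""

    def __init__(self):
        # Key pair generated inside the TEE at startup; never exported.
        self._signing_key = os.urandom(32)

    def infer_and_sign(self, prompt: str):
        # Placeholder for the HTTPS call to the LLM provider.
        response = f"llm-output for: {prompt}"
        # Sign the canonical (query, output) pair.
        payload = json.dumps({"prompt": prompt, "response": response},
                             sort_keys=True).encode()
        signature = hmac.new(self._signing_key, payload,
                             hashlib.sha256).hexdigest()
        return response, signature
```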
Initialization
At initialization, the TEE produces a signing key and a cryptographic attestation, which the agent submits to AgentNet together with the agent's public key:

If the attestation is valid, AgentNet registers the public key. The same public key can be reused for as long as this TEE instance remains active. When the TEE is rebooted, the signing key will change and a new public key must be registered with AgentNet.
The signing key cannot be exported from the TEE. Therefore any signed messages must originate in the TEE, and can be trusted.
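The registration step can be sketched as a toy registry. The attestation check below is a placeholder (a known-image-hash lookup); real TEE attestation is verified against the hardware vendor's root of trust, and all names here are hypothetical.

```python
class AgentNetRegistry:
    """Toy sketch of AgentNet-side key registration. The attestation
    format and the image-hash check are illustrative placeholders for
    real hardware attestation verification."""

    def __init__(self, trusted_image_hashes):
        self.trusted = set(trusted_image_hashes)
        self.registered_keys = {}

    def register(self, agent_id, public_key, attestation):
        # In this sketch: attestation = {"image_hash": ..., "public_key": ...}
        if attestation["image_hash"] not in self.trusted:
            return False  # image is not a known, audited TEE image
        if attestation["public_key"] != public_key:
            return False  # attestation must bind this exact key
        self.registered_keys[agent_id] = public_key
        return True

registry = AgentNetRegistry(trusted_image_hashes={"sha256:trusted-image"})
ok = registry.register(
    "agent-1", "pubkey-bytes",
    {"image_hash": "sha256:trusted-image", "public_key": "pubkey-bytes"})
```

After a TEE reboot, the agent would simply run `register` again with its fresh key and attestation, replacing the old entry.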
Agent Operation
To serve user requests, the agent will make API calls to an LLM provider. These calls originate within the TEE, using HTTPS to ensure a secure connection with the LLM provider.

The agent signs its LLM (query, output) pairs, providing a guarantee that requests were made to the API provider as specified by the TEE image.
The user may now check the agent's signatures against the public key registered on AgentNet to verify that the agent performed the expected API query.
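The user-side check can be sketched as follows. As above, HMAC stands in for asymmetric verification (with Ed25519, the user would call `verify` on the public key fetched from AgentNet), and the payload format is an assumption.

```python
import hashlib
import hmac
import json

def verify_inference(registered_key, prompt, response, signature):
    """User-side sketch: recompute the canonical (query, output) payload
    and check it against the agent's signature, using the key registered
    on AgentNet. HMAC is a stand-in for asymmetric verification."""
    payload = json.dumps({"prompt": prompt, "response": response},
                         sort_keys=True).encode()
    expected = hmac.new(registered_key, payload, hashlib.sha256).hexdigest()
    # Constant-time comparison to avoid leaking signature bytes.
    return hmac.compare_digest(expected, signature)
```

If the response text, the prompt, or the signature is altered in transit, the check fails, so the user only accepts outputs that the attested TEE actually produced.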
Security Assumptions
AgentNet's verifiable inference uses TEEs and TLS to provide its guarantees.
A Trusted Execution Environment (TEE) provides a cryptographic guarantee that a given machine image will be executed as intended. This relies on trust in both the hardware manufacturer and the root of trust for the attestation signer.
Given these assumptions, the TEE eliminates the need to trust the agent’s handling of API calls to the LLM. The agent is constrained to execute these calls exactly as specified in the image it used to register its public key with AgentNet.
When the agent signs its LLM query results, the signature proves that the queries were performed within the TEE, as specified by the image. Specifically, we can be confident that the agent used TLS to verify that its API call was served by the expected LLM provider.