
# The Confidential AI Gateway

OLLM.COM is a privacy-first AI gateway that aggregates multiple large language model (LLM) providers behind a single API. It is designed for organizations and developers who require strong privacy guarantees, data control, and the ability to choose between standard zero-retention infrastructure and confidential computing.
The platform's architecture enforces zero data visibility, zero data retention, and no training use of customer inputs. For workloads that require encryption-in-use, OLLM.COM supports confidential computing on trusted execution environments (TEEs) such as Intel SGX and NVIDIA Confidential Computing, and provides cryptographic proof that requests were processed inside a TEE.
OLLM.COM provides a unified API to select a model and a security mode for each request. In the standard mode, the platform enforces zero data retention and prevents training use while maintaining encryption in transit and at rest. In confidential computing mode, requests are processed inside a trusted execution environment, enabling encryption-in-use so plaintext is not exposed to the host.
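As an illustration, per-request model and security-mode selection might look like the sketch below. The endpoint path and the field names (`model`, `security_mode`, `messages`) are assumptions for illustration, not documented OLLM.COM API details.

```python
import json

# Hypothetical sketch: OLLM.COM's actual request schema is not documented here.
# The endpoint URL and field names ("model", "security_mode") are assumptions.
API_URL = "https://api.ollm.com/v1/chat/completions"  # assumed endpoint

VALID_MODES = {"standard", "confidential"}  # ZDR vs. TEE-backed processing

def build_request(model: str, prompt: str, security_mode: str = "standard") -> dict:
    """Assemble a request payload that selects a model and a security mode."""
    if security_mode not in VALID_MODES:
        raise ValueError(f"unknown security mode: {security_mode!r}")
    return {
        "model": model,
        "security_mode": security_mode,
        "messages": [{"role": "user", "content": prompt}],
    }

payload = build_request("example-model", "Summarize this contract.", "confidential")
print(json.dumps(payload, indent=2))
```

The payload would then be sent to the gateway with any standard HTTP client; only the `security_mode` value changes between the two processing paths.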
After processing in a TEE, OLLM.COM returns cryptographic evidence that the request executed inside the enclave, enabling verifiable privacy. The platform integrates with popular development tools, so teams can adopt the gateway without introducing new IDEs or custom setup. OLLM also offers an "Origin" capability for intelligent automation and persistent context management to support ongoing development workflows.
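Client-side checking of such evidence can be sketched as follows. Real TEE attestation (for example, Intel SGX DCAP) verifies a certificate chain rooted at the hardware vendor; in this simplified sketch an HMAC over the report stands in for that signature check, and every field name is an assumption rather than OLLM.COM's actual attestation format.

```python
import hashlib
import hmac

# Hypothetical sketch of verifying attestation evidence. A shared-secret HMAC
# substitutes for the vendor-rooted signature verification used by real TEE
# attestation schemes; all field names below are illustrative assumptions.

EXPECTED_MEASUREMENT = hashlib.sha256(b"approved-enclave-build").hexdigest()

def verify_attestation(report: dict, signature: str, key: bytes) -> bool:
    """Check report integrity, then check it names the expected enclave build."""
    canonical = repr(sorted(report.items())).encode()
    expected_sig = hmac.new(key, canonical, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected_sig, signature):
        return False  # tampered report, or signed by an unexpected party
    return report.get("enclave_measurement") == EXPECTED_MEASUREMENT

key = b"demo-key"
report = {"enclave_measurement": EXPECTED_MEASUREMENT, "request_id": "abc123"}
sig = hmac.new(key, repr(sorted(report.items())).encode(), hashlib.sha256).hexdigest()
print(verify_attestation(report, sig, key))  # True for a matching, untampered report
```

The two checks mirror what attestation provides in practice: integrity of the evidence, and a binding between the request and a specific, approved enclave build.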
| Option | Data Retention | Encryption Scope | Cryptographic Proof (TEE Attestation) | Typical Fit |
|---|---|---|---|---|
| Standard (ZDR) | Zero data retention | In transit and at rest | Not provided | General development, evaluation, and internal apps requiring ZDR |
| Confidential Computing (TEE) | Zero data retention | In transit, at rest, and during processing (in-use) | Provided | Regulated, high-sensitivity, or privacy-critical workloads |