UniversalAPI Security Architecture
April 2, 2026 · Security · 5 min read
When you connect your API keys and run AI agents on a shared platform, you need to know exactly how your data is protected. This page gives you the full picture in under five minutes.
Looking for the full technical deep dive?
This is the concise overview. For implementation details, code examples, and architecture diagrams, read the complete security deep dive.
At a Glance
Every request passes through 7 security layers before any user-authored code executes:
| Layer | What It Does |
|---|---|
| 1. TLS + API Gateway | Encrypts all traffic in transit; rate-limits requests |
| 2. Centralized Authorizer | Validates every token server-side before anything else runs |
| 3. Credential Injection | Your API keys are stored encrypted, injected server-side, and cleaned up after every request |
| 4. Runtime Sandbox | All user code runs in a hardened sandbox — no shell, no filesystem, no raw network |
| 5. Hardware Tenant Isolation | Each user gets a dedicated Firecracker microVM — separate CPU, memory, and disk |
| 6. Log Sanitization | Secrets are automatically redacted from all logs at both write-time and read-time |
| 7. Data Privacy | Full account deletion (GDPR/CCPA), no plaintext credential storage |
The core principle: you present a token. Everything else — validation, key injection, sandboxing, isolation — happens on our servers. Your client code never touches API keys, never manages OAuth flows, and never shares a runtime with another user.
1. Your Keys Never Leave Our Servers
This is the #1 concern for most users, so let's address it directly.
How it works:
- You store your third-party API keys (SerpAPI, OpenAI, AWS, Google, etc.) once via the Credentials page or API.
- Keys are stored encrypted in DynamoDB. They are never sent to your client or browser.
- When an MCP server or agent runs, the platform injects your keys as environment variables server-side — your code just reads `process.env.SERPAPI_KEY`.
- After every execution, all injected keys are removed from the environment. Keys from one request never leak into the next, even on a warm Lambda instance.
OAuth tokens (Google, Microsoft, GitHub) are refreshed automatically server-side. You authenticate once; the platform handles token refresh forever. You never see or manage refresh tokens.
Author-provided keys follow the same pattern. When an MCP server author includes their own API keys (the "keys-included" model), those keys are injected with higher priority than yours — but the same cleanup rules apply. Neither you nor the author's code can extract the other's keys.
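The inject, run, clean up cycle described above can be sketched in Python. The `injected_credentials` helper and the key names are illustrative, not the platform's actual implementation; the point is the `finally` block, which guarantees cleanup even when user code fails:

```python
import os
from contextlib import contextmanager

@contextmanager
def injected_credentials(user_keys, author_keys=None):
    """Inject decrypted keys as env vars, then always clean up.

    Author-provided keys (the "keys-included" model) take priority
    over the user's own keys for the same variable name.
    """
    merged = {**(user_keys or {}), **(author_keys or {})}
    try:
        os.environ.update(merged)
        yield
    finally:
        # Cleanup runs even if user code raises, so a warm Lambda
        # instance never carries keys into the next request.
        for name in merged:
            os.environ.pop(name, None)

# Hypothetical usage inside the execution wrapper:
with injected_credentials({"SERPAPI_KEY": "sk-user"}):
    assert os.environ["SERPAPI_KEY"] == "sk-user"
assert "SERPAPI_KEY" not in os.environ
```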
2. Every Request Is Authenticated Server-Side
A single PublicAuthorizer Lambda sits in front of every API endpoint. No request reaches any backend handler without passing through it first.
What the authorizer does on every request:
- Identifies the token type (user token, role token, Cognito JWT, or anonymous)
- SHA-256 hashes the token and compares it against stored hashes — we never store tokens in plaintext
- Checks token status (revoked? expired?), verifies credit balance
- Resolves the user's API keys and refreshes any expiring OAuth tokens
- Builds a validated auth context that the backend reads from headers
Zero-trust design: The backend Lambda never validates tokens itself. It only reads the pre-validated context that the authorizer already verified. There is one place to get auth right, not dozens.
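The hashing step can be sketched as follows. The helper names are hypothetical, but the technique (store only a SHA-256 digest, compare with a constant-time check) matches the authorizer behavior described above:

```python
import hashlib
import hmac

def hash_token(token: str) -> str:
    """Store and compare only this digest, never the raw token."""
    return hashlib.sha256(token.encode()).hexdigest()

def token_matches(presented: str, stored_hash: str) -> bool:
    # compare_digest avoids timing side channels on the comparison
    return hmac.compare_digest(hash_token(presented), stored_hash)

stored = hash_token("uapi_ut_example123")   # what the database holds
assert token_matches("uapi_ut_example123", stored)
assert not token_matches("uapi_ut_wrong", stored)
```

Because only digests are stored, a database breach yields no usable tokens.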
Token types at a glance:
| Token | Format | Use Case |
|---|---|---|
| User token | uapi_ut_* | Programmatic API access — acts as you, all your keys injected |
| Role token | uapi_rt_* | Scoped access — carries its own specific key set |
| Cognito JWT | eyJ... | Browser sessions — validated via JWKS (RS256) |
| Anonymous | (none) | Public endpoints only — IP rate-limited |
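Identifying the token type from its shape alone might look like the following; `classify_token` is an illustrative helper, with the prefixes taken from the table above:

```python
from typing import Optional

def classify_token(token: Optional[str]) -> str:
    """Map a presented credential to its type by shape alone."""
    if not token:
        return "anonymous"
    if token.startswith("uapi_ut_"):
        return "user"
    if token.startswith("uapi_rt_"):
        return "role"
    if token.startswith("eyJ"):   # base64url of '{"' -- a JWT header
        return "cognito_jwt"
    return "unknown"

assert classify_token("uapi_ut_abc") == "user"
assert classify_token(None) == "anonymous"
```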
3. All Code Runs in Hardened Sandboxes
MCP servers (JavaScript) and AI agents (Python) both execute user-authored code. That code runs in a sandbox that blocks anything dangerous.
MCP servers (Node.js) cannot:
- Spawn child processes or workers
- Open raw network sockets
- Access the filesystem directly
- Read platform AWS credentials or internal config from the environment
AI agents (Python) cannot:
- Call `eval()`, `exec()`, `compile()`, or `__import__()`
- Import `subprocess`, `socket`, `ctypes`, or `importlib`
- Run shell commands via `os.system()` or `os.popen()`
- Write files outside `/tmp`
- Read sensitive system paths (`/proc/self/environ`, `/var/runtime/`, etc.)
- Access `os.environ` directly — they see a filtered proxy that hides platform internals
Both runtimes strip platform environment variables (AWS credentials, table names, handler paths) before user code executes. Your code sees only its injected third-party keys.
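A filtered environment proxy of the kind described can be sketched as a read-only mapping. The prefix list and the `FilteredEnviron` class are assumptions for illustration, not the platform's actual filter:

```python
from collections.abc import Mapping

PLATFORM_PREFIXES = ("AWS_", "LAMBDA_", "_HANDLER")  # illustrative, not the real list

class FilteredEnviron(Mapping):
    """Read-only view of an environment that hides platform internals."""
    def __init__(self, real):
        self._visible = {k: v for k, v in real.items()
                         if not k.startswith(PLATFORM_PREFIXES)}
    def __getitem__(self, key):
        return self._visible[key]
    def __iter__(self):
        return iter(self._visible)
    def __len__(self):
        return len(self._visible)

env = FilteredEnviron({"AWS_SECRET_ACCESS_KEY": "x", "SERPAPI_KEY": "sk"})
assert "AWS_SECRET_ACCESS_KEY" not in env   # platform secret is invisible
assert env["SERPAPI_KEY"] == "sk"           # injected key is readable
```

User code iterating the proxy sees only its injected third-party keys; a `Mapping` subclass exposes no mutation methods, so the view is read-only by construction.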
4. Hardware-Level Tenant Isolation
Software sandboxes are the first line of defense. Hardware isolation is the second.
Both MCP servers and AI agents run with AWS Lambda's Tenant Isolation Mode (PER_TENANT). This means each user token gets its own dedicated Firecracker microVM — a lightweight virtual machine with:
- Separate CPU and memory — no shared compute between tenants
- Separate `/tmp` disk — no shared filesystem
- Separate execution environment — no shared global variables or in-memory caches
What this prevents: Even if an attacker somehow escaped the software sandbox, they'd land in a microVM that contains only their own data. No other user's keys, tokens, or execution state are present. Side-channel attacks, timing attacks, and memory inspection across tenants are all blocked at the hypervisor level.
The tenant ID is the token itself (not just the user ID), so different API tokens from the same user get separate microVMs.
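As an illustration of per-token tenancy, the sketch below derives an isolation key from the token rather than the user ID. The hashing step is an assumption for the example; the text above only specifies that the token, not the user ID, identifies the tenant:

```python
import hashlib

def tenant_id(token: str) -> str:
    """Derive the isolation key from the token itself, not the user ID.

    Hashing here is illustrative; the important property is that
    distinct tokens map to distinct tenants.
    """
    return hashlib.sha256(token.encode()).hexdigest()[:32]

# Two tokens belonging to the same user still map to distinct microVMs:
assert tenant_id("uapi_ut_token_one") != tenant_id("uapi_ut_token_two")
```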
5. Transparent Usage Metering
AI agents can call Amazon Bedrock models (Claude, Nova, Titan, etc.) using the platform's credentials. Every Bedrock API call is transparently metered:
- A proxy layer intercepts all `boto3.client('bedrock-runtime')` calls during agent execution
- Token counts and costs are extracted from every response
- A model allowlist prevents calls to unsupported models — no surprise bills
- All usage is itemized in your request logs and billing dashboard
Agent authors write normal boto3 code. The metering is invisible to them but ensures every dollar of compute is tracked and billed accurately.
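A metering proxy of this kind can be sketched without AWS access by wrapping any Bedrock-style client. `MeteredBedrockClient`, the allowlist contents, and `FakeBedrock` are illustrative; the `usage` field shape follows the Bedrock Converse response format:

```python
class MeteredBedrockClient:
    """Proxy that forwards calls and records token usage per response."""
    ALLOWED_MODELS = {"anthropic.claude-3-haiku"}  # illustrative allowlist

    def __init__(self, inner, ledger):
        self._inner = inner      # the real (or fake) bedrock-runtime client
        self._ledger = ledger    # list of per-call usage records

    def converse(self, *, modelId, **kwargs):
        if modelId not in self.ALLOWED_MODELS:
            raise PermissionError(f"model not on allowlist: {modelId}")
        response = self._inner.converse(modelId=modelId, **kwargs)
        usage = response.get("usage", {})
        self._ledger.append({
            "model": modelId,
            "input_tokens": usage.get("inputTokens", 0),
            "output_tokens": usage.get("outputTokens", 0),
        })
        return response

class FakeBedrock:  # stand-in for boto3.client('bedrock-runtime')
    def converse(self, *, modelId, **kwargs):
        return {"output": {}, "usage": {"inputTokens": 12, "outputTokens": 34}}

ledger = []
client = MeteredBedrockClient(FakeBedrock(), ledger)
client.converse(modelId="anthropic.claude-3-haiku", messages=[])
assert ledger[0]["input_tokens"] == 12
```

Agent code calls `converse` exactly as it would on a plain client; metering and the allowlist check are invisible side effects of the proxy.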
6. Secrets Never Appear in Logs
Request logs are visible to users for debugging. Secrets must never appear in them.
UniversalAPI uses two-layer log sanitization:
**Write-time (primary defense):** Before any log record is written to DynamoDB, all known secret values (API keys, tokens, passwords) are replaced with `[REDACTED]`. This layer has access to the actual secret values, so it catches exact matches — even if a key appears in an unexpected field.

**Read-time (defense-in-depth):** When logs are read back via the API, a second pass applies pattern-based redaction. This catches anything that looks like a secret (API key patterns, Bearer tokens, AWS credentials) even if it wasn't in the known-secrets list at write time.
Both layers run automatically. You don't need to configure anything — secrets are stripped before they ever reach your screen.
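The two layers can be sketched as a pair of passes. The patterns and helper names are illustrative, but they show why the layers complement each other: write-time knows the exact secret values, while read-time catches secret-shaped strings it was never told about:

```python
import re

SECRET_PATTERNS = [
    re.compile(r"uapi_[a-z]{2}_[A-Za-z0-9]+"),   # platform token shape
    re.compile(r"Bearer\s+[A-Za-z0-9._-]+"),     # Authorization headers
    re.compile(r"AKIA[0-9A-Z]{16}"),             # AWS access key IDs
]

def redact_write_time(record: str, known_secrets: list) -> str:
    """Layer 1: exact-match replacement of every known secret value."""
    for secret in known_secrets:
        record = record.replace(secret, "[REDACTED]")
    return record

def redact_read_time(record: str) -> str:
    """Layer 2: pattern-based sweep for anything secret-shaped."""
    for pattern in SECRET_PATTERNS:
        record = pattern.sub("[REDACTED]", record)
    return record

log = "called SerpAPI with key sk-live-abc123 and Bearer eyJtoken"
log = redact_write_time(log, ["sk-live-abc123"])   # before DynamoDB write
log = redact_read_time(log)                        # on API read-back
assert "sk-live-abc123" not in log and "eyJtoken" not in log
```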
7. Data Privacy and Account Deletion
- No plaintext credential storage. Tokens are SHA-256 hashed at rest. API keys are encrypted in DynamoDB.
- Full account deletion. `DELETE /user/account` purges all user data — account record, API keys, tokens, agents, MCP servers, conversation history, usage logs, and stored files. This is a hard delete, not a soft disable.
- GDPR/CCPA compliant. Account deletion is immediate and comprehensive. No data is retained after deletion.
- Minimal data collection. We store what's needed to run the platform (user ID, email, credentials, usage logs) and nothing more.
Quick Reference
| Concern | How We Handle It |
|---|---|
| API key exposure | Encrypted at rest, injected server-side, cleaned up after every request |
| Token theft | SHA-256 hashed at rest — database breach doesn't yield usable tokens |
| Cross-tenant data leakage | Hardware-isolated Firecracker microVMs per token |
| Sandbox escape | Even if escaped, attacker lands in their own isolated microVM |
| Secrets in logs | Two-layer redaction (write-time exact match + read-time pattern match) |
| OAuth token management | Automatic server-side refresh — you authenticate once |
| Unauthorized Bedrock usage | Model allowlist + transparent per-call metering |
| Data retention | Full hard-delete on account deletion (GDPR/CCPA) |
| MCP auth fragmentation | Single centralized authorizer — one place to get auth right |
Want the Full Technical Details?
This overview covers what we do. For how it works — including code examples, architecture diagrams, the token validation flow, sandbox implementation details, and our comparison to the MCP OAuth spec — read the complete authentication and security deep dive.
Ready to try it? Sign up for UniversalAPI — it's free to start, and your first 100 credits are on us. Browse the MCP server catalog or create your first agent.