Authentication
API keys, header precedence, rotation, the kill switch, and rate-limit signals.
The Partner API authenticates with one thing: an API key bound to your Layers organization. Every request carries it. Everything else — what you can do, how fast, who you can do it on behalf of — is read from the key on the server.
Key shape
```
lp_<env>_<key_id>_<secret>
```

| Segment | What it is |
|---|---|
| `lp_` | Constant prefix. Easy to spot in logs, easy to grep for, easy to pattern-match when scanning for accidental commits. |
| `<env>` | `live` or `test`. Both envs authenticate against the same org wallet and platform integrations today — see Sandbox & test keys for the full isolation story. |
| `<key_id>` | 16-character base32 handle. Public, log-safe. Used for rate-limit attribution and key lookup. |
| `<secret>` | 43-character base64url remainder. Sensitive. Stored as a bcrypt hash (cost factor 12); we can never recover the plaintext. |
Treat the whole string as a secret. The key_id portion (lp_live_01HX9Y6K7EJ4T2AB) is safe to log on its own; the full key never is.
If a key leaks, hit the kill switch (below) immediately, then rotate.
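Because the shape is fixed, leaked keys are easy to scan for in logs and diffs. A minimal sketch, assuming the uppercase base32 alphabet shown in the example key_id; the regex and helper name are ours, not part of any Layers SDK:

```python
import re

# Matches the documented shape lp_<env>_<key_id>_<secret>:
# 16 uppercase base32 chars for key_id, 43 base64url chars for the secret.
KEY_PATTERN = re.compile(r"lp_(live|test)_([A-Z0-9]{16})_([A-Za-z0-9_-]{43})")

def find_leaked_keys(text: str) -> list[str]:
    """Return any full key strings found in a blob of text (logs, diffs, ...)."""
    return [m.group(0) for m in KEY_PATTERN.finditer(text)]
```

Running this over CI output or a pre-commit diff is a cheap last line of defense before a key ships in a commit.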
Where to put the header
The primary form is `X-Api-Key: <key>`.

```
X-Api-Key: lp_live_01HX9Y6K7EJ4T2AB_4QZpN...remainder
```

`Authorization: Bearer <key>` is accepted as a fallback for clients that can't set custom headers. If both are sent, the server prefers `X-Api-Key`.
```bash
curl https://api.layers.com/v1/whoami \
  -H "X-Api-Key: $LAYERS_API_KEY"
```

```ts
await fetch("https://api.layers.com/v1/whoami", {
  headers: { "X-Api-Key": process.env.LAYERS_API_KEY! },
});
```

```ts
import axios from "axios";

const layers = axios.create({
  baseURL: "https://api.layers.com",
  headers: { "X-Api-Key": process.env.LAYERS_API_KEY! },
});

await layers.get("/v1/whoami");
```

```python
import os, requests

session = requests.Session()
session.headers["X-Api-Key"] = os.environ["LAYERS_API_KEY"]
session.get("https://api.layers.com/v1/whoami")
```

Scopes (planned)
Scopes are not yet enforced. Partner keys currently carry org-level access and /v1/whoami returns scopes: []. The vocabulary below describes the planned model so you can shape your code around it, but no route currently rejects requests for a missing scope.
Every key is created with a list of scopes. Once enforcement lands, a request that hits the wrong scope will get 403 FORBIDDEN_SCOPE back — no retry helps; the key needs to be re-issued with the right scope set.
| Scope | Lets you |
|---|---|
| `projects:read` / `projects:write` | List, read, create, patch, archive projects. |
| `ingest:write` | Kick off GitHub, website, and App Store ingest jobs. |
| `content:read` / `content:write` / `content:approve` | Read containers, generate or regenerate, approve or reject. |
| `social:read` / `social:write` | List connected accounts; create OAuth URLs; revoke. |
| `publish:write` | Schedule, publish, reschedule, cancel scheduled posts. |
| `events:read` (optional `+pii` sub-scope) | Read the SDK event stream. PII fields are redacted unless `+pii` is granted. |
| `metrics:read` | Read organic and ads metrics, top performers, ads-content. |
| `ads:read` / `ads:write` | Read ad accounts, campaigns, adsets, ads. Write is planned. |
| `influencers:write` | Create, clone, patch influencers. |
| `leased:write` | Submit lease requests, list assigned leased accounts, release. |
| `engagement:write` | Read and patch the auto-pilot engagement config. |
| `github:admin` | Register a GitHub installation, list repos. |
| `jobs:read` / `jobs:cancel` | Poll and cancel jobs. |
Partner-tier keys get all of the above by default. Self-serve scope provisioning is planned.
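Even before enforcement lands, it's worth shaping client code so a future 403 FORBIDDEN_SCOPE fails loudly instead of looping through a retry queue. A minimal sketch; the exception class and helper name are ours, not part of any Layers SDK:

```python
class ScopeError(Exception):
    """403 FORBIDDEN_SCOPE: the key must be re-issued; retrying never helps."""

def check_scope_error(status_code: int, body: dict) -> None:
    """Raise ScopeError on scope failures so callers don't treat them as transient."""
    code = body.get("error", {}).get("code")
    if status_code == 403 and code == "FORBIDDEN_SCOPE":
        raise ScopeError("key lacks a required scope; re-issue it with the right scope set")
```

Routing scope failures to a distinct exception type keeps them out of generic retry/backoff paths, where they would otherwise burn attempts on an error that can never succeed.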
Per-customer scoping
One key. Many customers. Each end-customer is a project, and you pin which one a call is for via the path — /v1/projects/:projectId/... is implicitly scoped to a single project. The server checks the project belongs to your org and returns 404 NOT_FOUND otherwise (we don't leak existence with a 403).
If you need a belt-and-suspenders check against your own customer-external-id, read the project first via GET /v1/projects/:projectId and assert customerExternalId matches what your code thinks it should be before issuing follow-up calls.
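The belt-and-suspenders check can be sketched like this. The HTTP call is injected as a plain function so it works with any client (for example, a thin wrapper over the requests session shown earlier); the helper name and the raised exception are our assumptions:

```python
from typing import Callable

def assert_customer(
    fetch_project: Callable[[str], dict],
    project_id: str,
    expected_external_id: str,
) -> dict:
    """Read the project first; refuse follow-up calls on a customer mismatch.

    fetch_project is any function that GETs /v1/projects/:projectId and
    returns the decoded JSON body.
    """
    project = fetch_project(project_id)
    if project.get("customerExternalId") != expected_external_id:
        raise ValueError(
            f"project {project_id} is not bound to customer {expected_external_id}"
        )
    return project
```

Run this once at the start of a per-customer workflow, then reuse the returned project payload for the follow-up calls.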
Rotation
Keys don't expire on a schedule by default — rotate when you have a reason (employee left, leak suspected, regular hygiene cadence). The rotation pattern:
- Ask your Layers contact to create a second key with the same access.
- Roll it out across your services. Keep the old key live during the cutover.
- Watch `last_used_at` on the old key via the Layers admin. Once it stops moving for a full deploy cycle, revoke it.
- If you're nervous, kill-switch the old key first (instant) and only revoke after a soak.
The two-key-overlap window is the safest pattern. There is no way to "rotate the secret in place" — the old key is gone the moment it's revoked, and any in-flight request still using it gets 401 UNAUTHENTICATED.
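During the cutover, an in-flight request can race the revocation: a call that left with the old key comes back 401 UNAUTHENTICATED. One defensive sketch for the overlap window only, with the HTTP call injected so it works with any client; the helper names are ours:

```python
from typing import Callable, Tuple

Response = Tuple[int, dict]  # (status code, decoded JSON body)

def call_with_rotation(
    do_request: Callable[[str], Response],
    old_key: str,
    new_key: str,
) -> Response:
    """Try the old key once; on 401 UNAUTHENTICATED, retry with the new key.

    Only useful mid-rotation — once the old key is revoked for good,
    drop it and pass just the new one.
    """
    status, body = do_request(old_key)
    if status == 401 and body.get("error", {}).get("code") == "UNAUTHENTICATED":
        return do_request(new_key)
    return status, body
```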
Kill switch
If a key is exposed — committed to a public repo, leaked in a screenshot, anything — flip the kill switch first and ask questions later.
- Per key. Layers can flip `kill_switch=true` on a single key. Every subsequent request returns `503 KILL_SWITCH` immediately, no retry. Reads, writes, polls — all of it. Killed keys can be un-killed; revoked keys are gone.
- Per organization. `organizations.api_access_revoked = true` cuts every key on the org at once. Useful if you don't know which key leaked.
- Global. A platform-wide kill exists for incident response. You'll see `503 KILL_SWITCH` across the board if it ever fires; check with support before paging us.
There is no programmatic kill-switch endpoint today — email or Slack your Layers contact. The flip is instant on our side; the cache invalidation propagates within a minute.
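Client-side, 503 KILL_SWITCH deserves different handling from an ordinary 503: a generic retry loop will hammer a key that has been deliberately cut off. A sketch, assuming the same error envelope shown for RATE_LIMITED below; the function name is ours:

```python
def is_killed(status_code: int, body: dict) -> bool:
    """True for the kill-switch 503 — stop retrying and alert a human.

    An ordinary 503 (transient outage) stays a retry candidate; the
    KILL_SWITCH code means the key or org has been intentionally cut.
    """
    return (
        status_code == 503
        and body.get("error", {}).get("code") == "KILL_SWITCH"
    )
```

Wire this into your retry logic as an early exit that pages an operator instead of sleeping and retrying.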
Rate limits
Every key has a tier. standard is the default; higher tiers are provisioned by Layers for enterprise partners. Buckets are keyed per (api_key_id, endpoint_class) — a noisy generation endpoint won't starve your read traffic, but it can starve other writes on the same key.
| Tier | Typical provisioning |
|---|---|
| standard | Default for all partner keys. |
| pilot | Granted for early-integration partners with planned higher throughput. |
| partner | Enterprise tier for GA partners with SLAs. |
Hit a limit and you get 429 RATE_LIMITED. The signals:
- `Retry-After` header (seconds).
- `X-RateLimit-Limit` / `X-RateLimit-Remaining` / `X-RateLimit-Reset` headers — bucket state.
- `X-RateLimit-Endpoint-Class: read-light | write-light | long-running` — which bucket you hit.
- `X-RateLimit-Tier: standard` — the tier in effect.
- Body: `{ "error": { "code": "RATE_LIMITED", "requestId": "req_...", "details": { "endpointClass": "write-light", "retryAfterMs": 1240 } } }`.
Honor Retry-After. See rate limits for the full bucket table and 429 envelope.
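Honoring those signals can be as small as a loop that sleeps for the server-advertised delay. A sketch with the request and sleep functions injected so it works with any client; the helper name and 1-second default are ours:

```python
import time
from typing import Callable, Dict, Tuple

Response = Tuple[int, Dict[str, str], dict]  # (status, headers, JSON body)

def with_backoff(
    do_request: Callable[[], Response],
    max_attempts: int = 5,
    sleep: Callable[[float], None] = time.sleep,
) -> Response:
    """Retry 429s, sleeping for the server-provided delay between attempts."""
    for _ in range(max_attempts - 1):
        status, headers, body = do_request()
        if status != 429:
            return status, headers, body
        # Prefer the Retry-After header (seconds); fall back to the
        # retryAfterMs field in the error body, defaulting to 1s.
        delay = float(headers.get("Retry-After", 0)) or (
            body.get("error", {}).get("details", {}).get("retryAfterMs", 1000) / 1000
        )
        sleep(delay)
    return do_request()  # final attempt, returned as-is even if it's a 429
```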
Comparing keys safely
If you pass keys around between services and ever need to compare them in code (rare, but it happens with shared-key fixtures in tests), use a constant-time compare: `crypto.timingSafeEqual` in Node, `hmac.compare_digest` in Python. A naïve `==` short-circuits at the first differing byte, and that timing difference can let an attacker recover the secret one byte at a time; it's the kind of bug that fails a security review.
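In Python that looks like the following; encoding to bytes first keeps the comparison well-defined for any input:

```python
import hmac

def keys_equal(a: str, b: str) -> bool:
    """Constant-time comparison — runtime doesn't depend on where they differ."""
    return hmac.compare_digest(a.encode(), b.encode())
```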
Best practices
- Store the key in your secret manager. Never in source, never in env files committed to git, never in screenshots.
- Use `_test` keys in CI and local dev. Reserve `_live` for production deploys.
- Create one key per integration, not one key per developer. Easier to rotate, easier to attribute usage.
- Set up a dashboard on the 429 rate so a quiet drift toward your ceiling doesn't surprise you.
- For idempotent retries on POSTs, see Common patterns → idempotency.