
Common patterns

The five patterns you'll hit on day one — idempotency, pagination, errors, async jobs, rate-limit headers.


Five patterns repeat across the surface. Internalize them once and the rest of the API is just shapes. Each section below points to the deep doc when you need more.

Idempotency

Every mutating POST and PATCH accepts an Idempotency-Key header. Set one on every mutation you make — not most, every. Networks are messy, and a duplicate content_generate job costs credits.

curl -X POST https://api.layers.com/v1/projects \
  -H "X-Api-Key: $LAYERS_API_KEY" \
  -H "Idempotency-Key: 7c2f1a3e-0b4c-4a11-9f7e-33c0a2c1bd55" \
  -H "Content-Type: application/json" \
  -d '{ "name": "Acme Mobile", "customerExternalId": "acme_42", "timezone": "UTC" }'

The replay window is 24 hours. Inside that window:

  • Same key, same body — the original response is replayed verbatim. Same status, same body. Safe to retry on a network blip.
  • Same key, different body — 409 IDEMPOTENCY_CONFLICT. The server refuses to do two different things under one key. Create a new UUID for the new request.
  • No key — the call runs as a fresh request every time. You own the duplicate.

Use a freshly generated v4 UUID (or a ULID) per logical operation. If you're retrying inside a single logical operation, reuse the key. If you're starting a new operation, create a new one.
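That discipline can be sketched in TypeScript. The wrapper below is illustrative — the helper name, retry count, and minimal response type are ours, not part of the API — but it shows the key point: mint the key once per logical operation, then reuse it on every transport-level retry.

```typescript
import { randomUUID } from "node:crypto";

// Minimal response surface so the sketch doesn't depend on DOM lib types.
interface MinimalResponse { ok: boolean; status: number; json(): Promise<unknown>; }
type FetchLike = (url: string, init: object) => Promise<MinimalResponse>;

// One logical operation = one Idempotency-Key, minted up front and reused
// on every retry. A network blip then replays the original response
// instead of running (and billing) the job twice.
async function postWithRetry(
  url: string,
  body: unknown,
  opts: { retries?: number; fetchImpl?: FetchLike } = {},
): Promise<MinimalResponse> {
  const { retries = 3, fetchImpl = fetch as unknown as FetchLike } = opts;
  const idempotencyKey = randomUUID(); // fresh v4 UUID per logical operation
  let lastError: unknown;
  for (let attempt = 0; attempt <= retries; attempt++) {
    try {
      return await fetchImpl(url, {
        method: "POST",
        headers: {
          "X-Api-Key": process.env.LAYERS_API_KEY!,
          "Idempotency-Key": idempotencyKey, // same key on every attempt
          "Content-Type": "application/json",
        },
        body: JSON.stringify(body),
      });
    } catch (err) {
      lastError = err; // transport failure: safe to retry with the same key
    }
  }
  throw lastError;
}
```

Note what is deliberately *not* retried here: an HTTP error response. A 4xx means the server made a decision, and replaying the same key will replay that decision.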

See Idempotency for the full semantics, including how it interacts with the jobs envelope.

Pagination

Every list endpoint is cursor-paginated. Pass cursor and limit; you get back items and a nextCursor when there's more.

curl "https://api.layers.com/v1/projects?limit=50" \
  -H "X-Api-Key: $LAYERS_API_KEY"
{
  "items": [
    { "id": "254a4ce1-f4ca-42b1-9e36-17ca45ef3d39", "name": "Acme Mobile" },
    { "id": "be52669f-af2e-4448-93b5-715cd5df8163", "name": "Beta Corp" }
  ],
  "nextCursor": "eyJpZCI6IjI1NGE0Y2UxIn0="
}

When nextCursor is null or absent, you've reached the end.

async function listAll<T>(path: string): Promise<T[]> {
  const items: T[] = [];
  let cursor: string | undefined;
  for (;;) {
    const url = new URL(`https://api.layers.com${path}`);
    url.searchParams.set("limit", "100");
    if (cursor) url.searchParams.set("cursor", cursor);
    const res = await fetch(url, {
      headers: { "X-Api-Key": process.env.LAYERS_API_KEY! },
    });
    if (!res.ok) throw new Error(`List failed: ${res.status}`);
    const page: { items: T[]; nextCursor?: string | null } = await res.json();
    items.push(...page.items);
    if (!page.nextCursor) return items;
    cursor = page.nextCursor;
  }
}

limit defaults vary per endpoint (commonly 25) and cap at 200. Cursors are opaque — don't parse them, don't generate them. They're only valid against the same query (changing status= mid-pagination invalidates the cursor).

Error shape

Errors are JSON, always. The status code tells you the family; the body tells you what to do about it.

{
  "error": {
    "code": "APPROVAL_REQUIRED",
    "message": "This content container needs approval before it can be scheduled.",
    "requestId": "req_RKT95R73PHHF5N1AMH9H2Q58MC",
    "details": {
      "containerId": "cnt_01HX9Y6K7EJ4T2ABCDEF",
      "approvalStatus": "pending"
    }
  }
}

Four fields:

  • code — drawn from a stable, finite set. Branch on this in your client. Never branch on message.
  • message — human-friendly. Subject to wording changes. Show it to humans, log it for triage.
  • requestId — echoes the X-Request-Id response header. Include this when you file a support ticket.
  • details — structured detail relevant to the code. Optional; present when there's something machine-readable to say.

The full set is documented in Errors. The ones you'll hit most: UNAUTHENTICATED, NOT_FOUND, VALIDATION, IDEMPOTENCY_CONFLICT, RATE_LIMITED, APPROVAL_REQUIRED, PLATFORM_ERROR. A few — KILL_SWITCH, CIRCUIT_OPEN, CREDENTIAL_INVALID — mean stop and don't retry.
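One way to branch on code in a client, as a sketch — the three-way split and the helper name are our illustration; only the stop-and-don't-retry set comes from the list above:

```typescript
// The error shape from above, as a type. Branch on the stable `code`
// field only — `message` wording can change under you.
interface ApiError {
  code: string;
  message: string;
  requestId: string;
  details?: Record<string, unknown>;
}

type Action = "retry" | "halt" | "fix-and-resend";

function classify(err: ApiError): Action {
  switch (err.code) {
    case "RATE_LIMITED":
      return "retry"; // transient; back off and try again
    case "KILL_SWITCH":
    case "CIRCUIT_OPEN":
    case "CREDENTIAL_INVALID":
      return "halt"; // the docs say: stop and don't retry
    default:
      return "fix-and-resend"; // e.g. VALIDATION — the request itself is wrong
  }
}
```

Whatever buckets you choose, log requestId alongside the decision so a support ticket can be filed from the same log line.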

Every resource id in details is an opaque string. Some are bare UUIDs, some are prefixed (cnt_, sp_, inf_, job_). The full catalog — including the request-id and event-id formats — is in ID formats.

Async jobs

Anything that takes more than a couple seconds returns a job envelope instead of the result inline. The pattern is POST → 202 + jobId, then poll GET /v1/jobs/:jobId until status is terminal (completed, failed, or canceled).

POST /v1/projects/254a4ce1-f4ca-42b1-9e36-17ca45ef3d39/content
→ 202 Accepted
{
  "jobId": "job_01HX9Y6K7EJ4T2ABCDEF01234",
  "kind": "content_generate",
  "status": "running",
  "stage": "queued",
  "projectId": "254a4ce1-f4ca-42b1-9e36-17ca45ef3d39",
  "locationUrl": "/v1/jobs/job_01HX9Y6K7EJ4T2ABCDEF01234",
  "startedAt": "2026-04-18T17:03:02.000Z"
}

Recommended cadence: poll every five seconds with a little jitter, back off to thirty seconds after a minute of running. Stop polling the second you see a terminal status.
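A minimal poller following that cadence — the helper names and the exact jitter amount are ours:

```typescript
const TERMINAL = new Set(["completed", "failed", "canceled"]);

// Cadence from the text: ~5 s with a little jitter for the first minute,
// then ~30 s. Pure function, so the schedule is easy to unit-test.
function pollDelayMs(elapsedMs: number): number {
  const base = elapsedMs < 60_000 ? 5_000 : 30_000;
  return base + Math.floor(Math.random() * 1_000); // up to 1 s of jitter
}

async function waitForJob(jobId: string): Promise<{ status: string }> {
  const start = Date.now();
  for (;;) {
    const res = await fetch(`https://api.layers.com/v1/jobs/${jobId}`, {
      headers: { "X-Api-Key": process.env.LAYERS_API_KEY! },
    });
    const job = (await res.json()) as { status: string };
    if (TERMINAL.has(job.status)) return job; // stop the moment it's terminal
    await new Promise((r) => setTimeout(r, pollDelayMs(Date.now() - start)));
  }
}
```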

The full state machine, the per-kind stage vocabulary, and how cancellation works live in Jobs.

Rate-limit signals

Hit your tier's ceiling and you get 429 RATE_LIMITED. The signals:

  • Retry-After header (seconds). Honor it; don't math your own back-off.
  • X-RateLimit-Limit, X-RateLimit-Remaining, X-RateLimit-Reset headers. Track these to back off before the limit, not after.
  • X-RateLimit-Endpoint-Class header: read-light | write-light | long-running. Tells you which bucket you're hitting.
  • X-RateLimit-Tier header: standard, pilot, etc.
  • error.details.retryAfterMs in the body. Same information as Retry-After, in milliseconds, easier to feed into a timer.

HTTP/1.1 429 Too Many Requests
Retry-After: 2
X-RateLimit-Limit: 60
X-RateLimit-Remaining: 0
X-RateLimit-Reset: 1729273684
X-RateLimit-Endpoint-Class: write-light
X-RateLimit-Tier: standard
X-Request-Id: req_RKT95R73PHHF5N1AMH9H2Q58MC

{
  "error": {
    "code": "RATE_LIMITED",
    "message": "Rate limit exceeded for write operations on this key.",
    "requestId": "req_RKT95R73PHHF5N1AMH9H2Q58MC",
    "details": { "retryAfterMs": 1240, "endpointClass": "write-light" }
  }
}

Buckets are keyed (api_key_id, endpoint_class) — a noisy generation endpoint won't starve your read traffic, but it can starve other writes on the same key. The full tier table and bucket policy live in Rate limits.
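Turning those signals into a timer value can look like this — the helper name and the one-second fallback (for when neither signal parses) are our assumptions:

```typescript
// Prefer the millisecond-precision value from the response body when
// present; fall back to the Retry-After header (whole seconds). Never
// invent your own back-off when the server has told you how long to wait.
function retryDelayMs(
  retryAfterHeader: string | null,
  details?: { retryAfterMs?: number },
): number {
  if (typeof details?.retryAfterMs === "number") return details.retryAfterMs;
  const seconds = Number(retryAfterHeader);
  return Number.isFinite(seconds) && seconds > 0 ? seconds * 1000 : 1_000;
}
```

Fed the example response above, this returns 1240 ms from the body rather than the coarser 2000 ms the header would give.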
