Building an AI agent workflow with Layers

Wire the Partner API and SDK into an autonomous growth agent. Reference architecture, polling patterns, and sample code for the three main loops.

Who this is for

You're building an AI product — an agent, an automation platform, a vertical SaaS that wants growth-as-a-feature for its own customers — and you want Layers running underneath. You want to programmatically onboard customers, generate content, publish posts, run ads, and read back performance without a human touching the Layers dashboard.

This playbook is the reference architecture. It covers the three loops that make up a working growth agent: the onboarding loop, the publish-to-learn loop, and the optimization loop. Sample code shows the polling patterns and webhook integration.

When to use this

  • You're an integrator, not an end-customer — you want Layers inside your product
  • You have engineering resources to wire API calls and background job processing
  • Your product has customers (or will) who want growth without managing it themselves
  • You understand that "AI agent" here means "automated workflow making API calls," not "LLM chatbot"

If you want to use Layers for your own app's growth (and you're not building a platform), start from first users with no budget or first ads on Meta and Apple Search instead. This playbook assumes you're building on top of Layers, not consuming it.

The three loops

An autonomous growth agent has three repeating work cycles:

  1. Onboarding loop — when a new customer signs up in your product, their Layers project comes online, their SDK is installed, their first content is generated, their social accounts are connected.
  2. Publish-to-learn loop — content gets generated on a cadence, published, measured. Top performers get cloned or amplified. Weak performers get killed.
  3. Optimization loop — ads run, metrics come back, budgets adjust, creative rotates. The Layers optimizer handles most of this automatically; your agent reads the outcomes and surfaces them to your customer.

Each loop has a polling interval, a webhook event it can react to, and a set of API calls it triggers. Let's walk through them.

Pre-work: authentication and the job envelope

Two concepts to understand before any code:

API keys are org-scoped. One Layers organization per partner. Inside your org, you create projects — one per end-customer. Your API key grants access to every project your org owns. See API keys.

Every long-running operation returns a job. Generate content? Returns a jobId. Clone a top performer? jobId. Ingest a GitHub repo? jobId. You poll GET /v1/jobs/:jobId (or the locationUrl returned with the job) until the status reaches completed. See Jobs.

Set this up once as a helper:

// Poll a job until it completes, fails, or the timeout elapses.
async function waitForJob(jobId: string, { timeout = 600_000, interval = 3_000 } = {}) {
  const deadline = Date.now() + timeout;
  while (Date.now() < deadline) {
    const res = await fetch(`https://api.layers.com/v1/jobs/${jobId}`, {
      headers: { Authorization: `Bearer ${process.env.LAYERS_API_KEY}` },
    });
    if (!res.ok) {
      // Transient poll errors (e.g. a 429) should not kill the wait; retry next tick.
      await new Promise((r) => setTimeout(r, interval));
      continue;
    }
    const job = await res.json();
    if (job.status === "completed") return job;
    if (job.status === "failed") throw new Error(`Job ${jobId} failed: ${job.error?.message}`);
    await new Promise((r) => setTimeout(r, interval));
  }
  throw new Error(`Job ${jobId} did not complete within ${timeout}ms`);
}

Every code sample below uses this helper.

Loop 1: onboarding

Triggered when your end-customer signs up in your product and opts into growth automation. The sequence:

  1. Create a Layers project for the customer
  2. Ingest their codebase (GitHub path) or App Store listing (mobile path) to extract brand context
  3. Provision SDK credentials and generate install instructions
  4. Connect their social accounts via OAuth
  5. Generate first content batch and route through approval

Sample code for the core flow (GitHub path):

// Step 1: Create the project
const projectRes = await fetch("https://api.layers.com/v1/projects", {
  method: "POST",
  headers: {
    Authorization: `Bearer ${process.env.LAYERS_API_KEY}`,
    "Idempotency-Key": crypto.randomUUID(),
    "Content-Type": "application/json",
  },
  body: JSON.stringify({
    name: customer.name,
    customerExternalId: customer.id, // your own ID, lets you look up later without storing Layers' UUID
    timezone: customer.timezone,
    primaryLanguage: customer.language ?? "en",
  }),
});
const project = await projectRes.json();

// Step 2: Ingest their GitHub repo (async — returns a job)
const ingestRes = await fetch(
  `https://api.layers.com/v1/projects/${project.id}/github/ingest`,
  {
    method: "POST",
    headers: {
      Authorization: `Bearer ${process.env.LAYERS_API_KEY}`,
      "Idempotency-Key": crypto.randomUUID(),
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      installationId: customer.githubInstallationId,
      repo: customer.githubRepo, // e.g. "acme/acme-app"
    }),
  },
);
const { jobId: ingestJobId } = await ingestRes.json();
const ingestResult = await waitForJob(ingestJobId);

The full walkthrough with all five steps is in onboard a customer end-to-end. That's the canonical reference — this playbook focuses on the agent-level orchestration around it.

Handling async gracefully

An onboarding flow can take anywhere from 30 seconds to 15 minutes depending on how much scaffolding is involved (repo size, content generation, OAuth flows on the customer side). Your agent has two patterns:

Pattern A: Synchronous polling with progress updates. Good if the customer is actively watching a setup screen in your UI. Stream progress updates to them as each job completes.

Pattern B: Asynchronous with webhook notifications. Good if onboarding can happen in the background. Register a webhook for events like content.generated, project.ready, social_account.connected, and handle them as they fire. See webhooks.

Production agents usually do both: poll for the first 60 seconds while the UI is open, then fall back to webhooks for anything still pending.
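In code, the hybrid pattern looks roughly like this. `pollJob` and `deferToWebhook` are hypothetical hooks your own code supplies: the real poll would call GET /v1/jobs/:jobId, and the deferral would write a pending-job row for your webhook handler to resolve later.

```typescript
// Hybrid wait: poll while the setup screen is likely open, then hand the
// job off to the webhook path instead of blocking a worker indefinitely.
type JobStatus = { status: "pending" | "completed" | "failed" };

async function pollThenDefer(
  jobId: string,
  pollJob: (id: string) => Promise<JobStatus>,
  deferToWebhook: (id: string) => Promise<void>,
  { budgetMs = 60_000, intervalMs = 3_000 } = {},
): Promise<"completed" | "failed" | "deferred"> {
  const deadline = Date.now() + budgetMs;
  while (Date.now() < deadline) {
    const job = await pollJob(jobId);
    if (job.status !== "pending") return job.status;
    await new Promise((r) => setTimeout(r, intervalMs));
  }
  // Budget exhausted: record the job as pending and let the webhook finish it.
  await deferToWebhook(jobId);
  return "deferred";
}
```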

What failure looks like

Your agent creates 100 projects simultaneously, hits the rate limit, half of them fail partway. No retries, no idempotency keys, users stuck in half-onboarded state. Fix: use Idempotency-Key header on every mutation endpoint — Layers dedupes on that header for 24 hours, so retries are safe. Add a job queue on your side with exponential backoff. See idempotency and rate limits.

Loop 2: publish-to-learn

Runs on a daily cadence per customer. The sequence:

  1. Read recent performance metrics for the customer's project
  2. Identify top-performing organic posts (winners)
  3. Clone or amplify winners into new content
  4. Kill or deprioritize losers
  5. Schedule the new content through the customer's connected accounts

This loop is what publish to learn walks through in detail. The agent version:

// Daily cron per customer
async function runPublishToLearn(projectId: string) {
  // 1. Find this week's top performers
  const topRes = await fetch(
    `https://api.layers.com/v1/projects/${projectId}/top-performers?metric=engagement_rate&window=7d&limit=5`,
    { headers: { Authorization: `Bearer ${process.env.LAYERS_API_KEY}` } },
  );
  const { items: topPerformers } = await topRes.json();

  if (topPerformers.length === 0) {
    // Fresh project, no data yet. Skip this cycle.
    return { skipped: true, reason: "no_performance_data" };
  }

  // 2. Clone the best performer as 2 variants
  const newContainerId = crypto.randomUUID();
  const cloneRes = await fetch(
    `https://api.layers.com/v1/content/${newContainerId}/clone-from-post`,
    {
      method: "POST",
      headers: {
        Authorization: `Bearer ${process.env.LAYERS_API_KEY}`,
        "Idempotency-Key": crypto.randomUUID(),
        "Content-Type": "application/json",
      },
      body: JSON.stringify({
        projectId,
        sourcePlatformPostId: topPerformers[0].platformPostId,
        mode: "fork",
        variations: 2,
      }),
    },
  );
  const { jobId } = await cloneRes.json();
  const job = await waitForJob(jobId);

  // 3. The container is generated. Approval can auto-approve or queue for human review.
  //    That's configured on the project's approval policy.
  return { containerId: job.containerId, topPerformerId: topPerformers[0].platformPostId };
}

The critical design choice here is how the agent decides what to act on. Three ranking axes:

  • engagement_rate — for early-stage projects where absolute views are noisy
  • conversions — if the customer has SDK installed and is tracking conversion events
  • roas — for mature projects with ads running

Your agent should pick the axis based on the project's stage. Early-stage: engagement. Later: conversions or ROAS. The top-performers endpoint accepts any of these.
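A minimal sketch of that stage-based selection. The stage fields (ageDays, hasSdkConversions, adsActive) are illustrative names; derive them from whatever your agent already tracks per project.

```typescript
// Pick the ranking metric from project stage: engagement while data is
// sparse, conversions once SDK events flow, ROAS once ads are live.
type ProjectStage = { ageDays: number; hasSdkConversions: boolean; adsActive: boolean };

function pickRankingMetric(stage: ProjectStage): "engagement_rate" | "conversions" | "roas" {
  if (stage.adsActive) return "roas"; // mature: optimize for return on ad spend
  if (stage.hasSdkConversions && stage.ageDays > 42) return "conversions";
  return "engagement_rate"; // early stage: absolute views are noisy
}
```

The returned value drops straight into the top-performers query string as the metric parameter.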

What failure looks like

Your agent blindly clones the top performer every day, even when the "top" post hasn't crossed a meaningful threshold. Within two weeks the content feed is variations on the same mediocre post. Fix: set a minimum threshold (e.g., only clone if top performer engagement_rate > 0.04), and cap clones-per-source at 4-5. See when clones stop helping.
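That guard is a few lines of pure logic. The 0.04 floor and the per-source cap come from the text above; `shouldClone` is a hypothetical helper your daily cron would call before issuing the clone request.

```typescript
// Guard against clone spirals: only clone a winner that clears a floor,
// and stop once a source post has spawned enough variants.
const MIN_ENGAGEMENT_RATE = 0.04;
const MAX_CLONES_PER_SOURCE = 5;

function shouldClone(
  top: { platformPostId: string; engagementRate: number },
  clonesBySource: Map<string, number>, // your own count of clones per source post
): boolean {
  if (top.engagementRate <= MIN_ENGAGEMENT_RATE) return false;
  return (clonesBySource.get(top.platformPostId) ?? 0) < MAX_CLONES_PER_SOURCE;
}
```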

Loop 3: optimization

The shortest loop, because most of it happens without your agent doing anything. Layers' paid-media optimizer runs nightly and makes decisions automatically: pausing underperforming ads, refreshing creative pools, scaling budgets incrementally.

Your agent's job in this loop is to:

  1. Read the decision log and surface material decisions to the customer
  2. Override when the customer disagrees with a specific call
  3. Inject fresh creative when the pool gets thin

Reading the decision log:

const metricsRes = await fetch(
  `https://api.layers.com/v1/projects/${projectId}/ads-metrics?` +
  `since=${weekAgo}&until=${now}&groupBy=ad&metrics=spend&metrics=conversions&metrics=cpa`,
  { headers: { Authorization: `Bearer ${process.env.LAYERS_API_KEY}` } },
);
const adMetrics = await metricsRes.json();

// Surface: ads spending heavily with no conversions (likely pauses are coming)
const worryAds = adMetrics.series.filter(
  (ad) => ad.spend > 50 && ad.conversions === 0
);

Overriding a decision:

// If the customer wants to keep a specific creative in rotation regardless of score:
await fetch(`https://api.layers.com/v1/projects/${projectId}/ads-content/${adsContentId}`, {
  method: "PATCH",
  headers: {
    Authorization: `Bearer ${process.env.LAYERS_API_KEY}`,
    "Content-Type": "application/json",
  },
  body: JSON.stringify({ override: "include" }),
});

See optimizer decisions for the full override model and creative selection for how the selection algorithm ranks the pool.

When to keep the agent out of optimization

Don't have your agent auto-override optimizer decisions without customer consent. The optimizer is built on data; your agent, even if LLM-powered, is reasoning from sparse context about what it thinks the customer wants. The override column (include, exclude, null) exists for explicit human or agent intervention, not for automatically countermanding the scorer.

If you find your agent wants to override regularly, the tuning needs to happen on project config (CPA targets, fatigue thresholds) not on individual creatives. Talk to Layers support about config tuning if you're hitting this a lot.

Reference architecture

Here's how the three loops fit together in a production agent:

┌─────────────────────────────────────────────────────────┐
│                      Your product                       │
│                                                         │
│  ┌─────────────┐     ┌──────────────┐    ┌───────────┐  │
│  │  Customer   │     │  Onboarding  │    │ Dashboard │  │
│  │   signup    │────▶│     loop     │    │ (progress)│  │
│  └─────────────┘     │  (one-time)  │    └───────────┘  │
│                      └──────┬───────┘                   │
│                             │                           │
│                             ▼                           │
│  ┌───────────────────────────────────────────────────┐  │
│  │              Background job runner                │  │
│  │                                                   │  │
│  │  - Publish-to-learn loop (daily per customer)     │  │
│  │  - Optimization review (daily per customer)       │  │
│  │  - Webhook ingestion                              │  │
│  └────────────────────────┬──────────────────────────┘  │
│                           │                             │
└───────────────────────────┼─────────────────────────────┘
                            │
                            ▼
                 ┌──────────────────────┐
                 │  Layers Partner API  │
                 │                      │
                 │  - /v1/projects      │
                 │  - /v1/content      │
                 │  - /v1/metrics       │
                 │  - /v1/top-performers│
                 │  - /v1/ads           │
                 │  - /v1/jobs          │
                 └──────────┬───────────┘
                            │
                            ▼
                 ┌──────────────────────┐
                 │  Layers execution    │
                 │                      │
                 │  - Content gen       │
                 │  - Distribution      │
                 │  - Ads optimizer     │
                 │  - UGC pipeline      │
                 │  - SDK ingest        │
                 └──────────────────────┘

Your product handles the UI and customer-facing logic. Layers handles the growth execution. The two communicate through the Partner API (outbound calls) and webhooks (inbound events).

Webhook events to subscribe to

These are the events most agents care about:

| Event | When it fires | Agent reaction |
| --- | --- | --- |
| project.ready | Onboarding complete | Mark customer as "active" in your UI |
| content.generated | A content container finished generation | Surface in your UI for customer to review, or auto-approve |
| content.approved | Content passed approval | Schedule for distribution |
| post.published | A post went live on platform | Update customer's timeline in your UI |
| ads.decision | Optimizer made a material decision | Surface in dashboard |
| lease_request.assigned | Leased-account request fulfilled | Notify customer, attach to project |

Register these via the webhooks API. HMAC signing is handled for you — verify with the secret returned at registration.
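Verification is standard HMAC-SHA256 over the raw request body, compared in constant time. The header name ("x-layers-signature") and hex encoding below are assumptions; use whatever the registration response specifies.

```typescript
import { createHmac, timingSafeEqual } from "node:crypto";

// Recompute the HMAC over the raw (unparsed) body and compare in constant
// time so signature checks don't leak timing information.
function verifyWebhook(rawBody: string, signatureHeader: string, secret: string): boolean {
  const expected = createHmac("sha256", secret).update(rawBody).digest("hex");
  const a = Buffer.from(expected);
  const b = Buffer.from(signatureHeader);
  return a.length === b.length && timingSafeEqual(a, b);
}
```

Note the body must be the raw bytes as received; parsing and re-serializing JSON before hashing will break verification.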

When to use an LLM in the loop

Most of the code above isn't LLM-driven. It's plain API orchestration. Where does an LLM add value?

Good LLM uses:

  • Generating the initial project brief from a customer intake form
  • Writing the customer-facing explanation of what the optimizer just did
  • Deciding what to show a customer in their dashboard summary
  • Interpreting ambiguous customer feedback ("my last post was weird") into concrete API actions

Bad LLM uses:

  • Replacing the optimizer with LLM-based ad decisions (the optimizer is data-driven; LLMs hallucinate without data)
  • Generating content outside of Layers' content pipeline (lose the safety/approval/scoring layers)
  • Automatically deciding to release a leased account or pause ads without customer consent

The rule of thumb: use LLMs for language tasks (explaining, summarizing, interpreting), not for control tasks (acting on the customer's behalf without explicit consent). Layers already does the control tasks with the scoring pipeline, optimizer, and approval gate — your LLM doesn't need to recreate those.

Rate limits, idempotency, and safety

Three things your agent must handle correctly:

Rate limits. The API is rate-limited per key. On 429, back off exponentially (don't hammer). See rate limits.
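A sketch of 429 handling that prefers the server's Retry-After hint when one is present (an assumption about the response headers) and falls back to exponential backoff otherwise:

```typescript
// Compute how long to sleep before retrying a rate-limited request.
// Returns null for non-429 responses (no delay needed).
function backoffDelayMs(
  res: { status: number; headers: { get(name: string): string | null } },
  attempt: number,
  baseMs = 1_000,
): number | null {
  if (res.status !== 429) return null;
  const retryAfter = res.headers.get("retry-after"); // seconds, if the server sends it
  if (retryAfter) return Number(retryAfter) * 1_000;
  return baseMs * 2 ** attempt; // 1s, 2s, 4s, ...
}
```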

Idempotency. Every mutation endpoint accepts an Idempotency-Key header. Retries on network failures won't create duplicate projects, duplicate content, or duplicate jobs. Use a UUID per logical operation. See idempotency.

Approval gates. Every project has an approval policy. Generated content doesn't auto-ship by default — it queues for review. For agent-driven flows, you can configure the project to auto-approve after a warm-up period (firstNPostsBlocked). But the approval gate is the only thing between your agent and a live post on the customer's audience. Don't defeat it lightly. See approval.

Monitoring your agent

An autonomous agent needs observability you can act on. Things to log and alert on:

  • Job failures per customer, per day. Spikes usually mean a systemic issue.
  • Customers with zero posts published in the last 14 days. Usually means stuck in approval queue or disconnected social account.
  • Customers with dropping organic_score averages. Early warning that content quality has drifted.
  • Optimizer decisions being frequently overridden. Indicates a config-tuning need.

If you have telemetry infrastructure for your own product, piping these to your dashboards is straightforward. Layers surfaces most of this via the metrics API, so the agent can fetch it directly.
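The checks above reduce to simple predicates over a per-customer snapshot. The snapshot shape below is illustrative; assemble it from the Layers metrics API plus your own job records.

```typescript
// Flag customers needing attention from a daily per-customer snapshot.
type CustomerSnapshot = {
  customerId: string;
  postsLast14d: number;
  jobFailuresToday: number;
  organicScoreTrend: number; // e.g. last-7d average minus prior-7d average
};

function monitoringFlags(s: CustomerSnapshot): string[] {
  const flags: string[] = [];
  if (s.postsLast14d === 0) flags.push("no_posts_14d");     // stuck approval queue or disconnected account
  if (s.jobFailuresToday >= 3) flags.push("job_failures");  // possible systemic issue
  if (s.organicScoreTrend < 0) flags.push("quality_drift"); // content quality drifting down
  return flags;
}
```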

The honest constraint

An AI agent on top of Layers is powerful, but it's not magic. Three things it can't do for you:

  1. Fix a customer's retention. If their product doesn't retain, Layers will acquire users who leave. See retention-driven growth for the upstream work.
  2. Replace creative taste. The approval gate exists because generated content needs a human (or a carefully configured agent) in the loop. Fully automated creative without taste checks drifts off-brand fast.
  3. Skip the learning phase. Every new customer's data needs 2-6 weeks to produce signal. Your agent can't rank top performers that don't exist yet.

Building an agent that respects these constraints — acknowledges retention issues, keeps humans in the loop for taste, paces expectations during early-stage — is what distinguishes an agent that helps customers from one that makes noise.
