# AI / LLM Data Handling
What goes to each model provider, and how we disclose AI-generated output.
Layers uses LLMs and image/video models for:
- Content generation (copy, image, video, scripting).
- The ads-optimization agent.
- App Machina's SDK-instrumentation coding agent.
- UGC researcher + creative agents.
## Providers
| Provider | Used for |
|---|---|
| Google Vertex AI / Gemini | Primary LLM for content generation, ads agent, scoring |
| Anthropic (Claude) | Selected agent tasks |
| OpenAI | Selected agent tasks |
| Replicate | Image / video model hosting |
Data is sent per request only. We don't enroll in any training-data sharing program, and agent calls use the providers' zero-retention inference APIs where those are offered.
## What gets sent
| Context | What's in the prompt |
|---|---|
| Content generation | Brand brief, voice, banned words, product info, reference links. |
| Ads agent | Campaign config, current metrics, pool of eligible creatives. |
| App Machina | Repo files relevant to the task, user prompt, compile errors. |
| UGC researcher | Creator allowlist, post content from SIFT, niche filters. |
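As a minimal sketch of the allowlisting idea behind the table above (field names and structure here are hypothetical, not Layers' actual prompt format), a content-generation prompt would be assembled from only the scoped brand fields:

```python
# Hypothetical allowlist mirroring the "Content generation" row above.
ALLOWED_FIELDS = {"brand_brief", "voice", "banned_words", "product_info", "reference_links"}

def build_content_prompt(brand_record: dict) -> str:
    """Include only the fields scoped for the content-generation task."""
    scoped = {k: v for k, v in brand_record.items() if k in ALLOWED_FIELDS}
    return "\n".join(f"{k}: {v}" for k, v in sorted(scoped.items()))

record = {
    "brand_brief": "Eco-friendly sneakers",
    "voice": "playful",
    "vault_token": "secret",        # filtered out — never reaches the provider
    "end_user_email": "a@b.c",      # filtered out — never reaches the provider
}
print(build_content_prompt(record))
```

Anything not on the allowlist simply never makes it into the outbound prompt, regardless of what the upstream record contains.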
## What does NOT get sent
- End-user PII from SDK events.
- Vault-stored tokens / credentials.
- Anything not relevant to the specific task.
Each agent's system prompt scopes what it reads.
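The exclusions above can also be enforced as a denylist check before any provider call. This is a sketch under assumed key-naming conventions (the `vault_` / `sdk_user_` prefixes are hypothetical), not Layers' implementation:

```python
# Hypothetical prefixes for Vault credentials and SDK end-user PII.
BLOCKED_PREFIXES = ("vault_", "sdk_user_")

def assert_no_blocked_fields(payload: dict) -> dict:
    """Reject an outbound prompt payload containing blocked keys."""
    blocked = [k for k in payload if k.startswith(BLOCKED_PREFIXES)]
    if blocked:
        raise ValueError(f"blocked fields in outbound payload: {blocked}")
    return payload
```

A check like this acts as a last line of defense behind the per-agent scoping: even if an agent's prompt assembly were misconfigured, the payload would be rejected before leaving the system.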
## Transparency
AI-generated creative includes metadata identifying it as such. User-facing output is labeled where legally required (e.g., California SB 942, EU AI Act transparency obligations).
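As an illustrative sketch of what metadata tagging can look like — the sidecar filename, schema, and field names here are hypothetical, not Layers' actual format — an asset might get a small JSON sidecar marking it as AI-generated:

```python
import json
from datetime import datetime, timezone

def tag_asset(asset_path: str) -> dict:
    """Write a JSON sidecar marking the asset as AI-generated (hypothetical schema)."""
    sidecar = {
        "asset": asset_path,
        "ai_generated": True,
        "tagged_at": datetime.now(timezone.utc).isoformat(),
    }
    with open(asset_path + ".meta.json", "w") as f:
        json.dump(sidecar, f, indent=2)
    return sidecar
```

In practice, provenance can also be embedded directly in the asset (e.g., C2PA content credentials or IPTC `digitalSourceType`); a sidecar is just the simplest form to illustrate.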
## Human oversight
Every automated decision has:
- Guardrails (min-age, min-spend, never-empty-adset — enforced by the optimizer).
- Manual override (you can always intervene).
- An entry in the partner audit log whenever the action is partner-visible.
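The guardrail layer above can be sketched as a predicate the optimizer must clear before acting. The thresholds and field names here are hypothetical (actual values are configuration, not code):

```python
from dataclasses import dataclass

@dataclass
class AdsetAction:
    adset_age_days: int          # how long the ad set has been live
    adset_spend: float           # spend accumulated so far
    creatives_after_action: int  # creatives remaining if the action applies

# Hypothetical thresholds for illustration only.
MIN_AGE_DAYS = 3
MIN_SPEND = 50.0

def guardrails_pass(action: AdsetAction) -> bool:
    """Return True only if the proposed action clears every guardrail."""
    if action.adset_age_days < MIN_AGE_DAYS:       # min-age
        return False
    if action.adset_spend < MIN_SPEND:             # min-spend
        return False
    if action.creatives_after_action < 1:          # never-empty-adset
        return False
    return True
```

The key property is that every rule must pass; a single failed guardrail blocks the automated action, leaving it to manual override.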
See Safety & guardrails.