AI / LLM Data Handling

What goes to each model provider, and how we stay transparent about it.

Layers uses LLMs and image/video models for:

  • Content generation (copy, image, video, scripting).
  • The ads-optimization agent.
  • App Machina's SDK-instrumentation coding agent.
  • UGC researcher + creative agents.

Providers

| Provider | Used for |
| --- | --- |
| Google Vertex AI / Gemini | Primary LLM for content generation, ads agent, scoring |
| Anthropic (Claude) | Selected agent tasks |
| OpenAI | Selected agent tasks |
| Replicate | Image / video model hosting |

Data is shared per-request only. We don't enroll in any provider's training-data sharing program, and agent calls use the providers' zero-retention inference APIs where those are offered.

What gets sent

| Context | What's in the prompt |
| --- | --- |
| Content generation | Brand brief, voice, banned words, product info, reference links. |
| Ads agent | Campaign config, current metrics, pool of eligible creatives. |
| App Machina | Repo files relevant to the task, user prompt, compile errors. |
| UGC researcher | Creator allowlist, post content from SIFT, niche filters. |
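As a rough illustration of the per-context scoping above (the function and field names here are hypothetical, not Layers' actual code), a prompt builder can whitelist exactly the fields each agent is allowed to see, so anything else never reaches a provider:

```python
# Hypothetical sketch: each agent context maps to an explicit allowlist of
# fields. Anything not on the list is silently dropped before the API call.
ALLOWED_FIELDS = {
    "content_generation": {
        "brand_brief", "voice", "banned_words", "product_info", "reference_links",
    },
    "ads_agent": {"campaign_config", "current_metrics", "eligible_creatives"},
}

def build_prompt_payload(context: str, data: dict) -> dict:
    """Return only the fields this agent context is allowed to send."""
    allowed = ALLOWED_FIELDS[context]
    return {k: v for k, v in data.items() if k in allowed}

payload = build_prompt_payload(
    "ads_agent",
    {
        "campaign_config": {"daily_budget": 500},
        "current_metrics": {"cpa": 3.2},
        "vault_token": "secret",  # not on the allowlist, so it is dropped
    },
)
```

An allowlist (rather than a denylist) fails closed: a newly added field is excluded by default until someone deliberately adds it to an agent's scope.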

What does NOT get sent

  • End-user PII from SDK events.
  • Vault-stored tokens / credentials.
  • Anything not relevant to the specific task.

Each agent's system prompt scopes what it reads.
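One way to enforce the exclusions above is a scrubbing pass over any event data before it can enter a prompt. This is an illustrative sketch, not Layers' implementation; the deny-listed keys and the redaction pattern are assumptions:

```python
import re

# Hypothetical deny-list of SDK event fields that must never reach a model.
PII_KEYS = {"email", "phone", "ip_address", "device_id"}

def scrub_event(event: dict) -> dict:
    """Drop PII keys and redact anything that looks like an email address."""
    clean = {}
    for key, value in event.items():
        if key in PII_KEYS:
            continue  # drop the field entirely
        if isinstance(value, str):
            # Redact email-shaped strings hiding in free-text values.
            value = re.sub(r"[\w.+-]+@[\w-]+\.[\w.-]+", "[redacted-email]", value)
        clean[key] = value
    return clean
```

In practice such a scrubber would sit between the event store and the prompt builder, so no agent code path can bypass it.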

Transparency

AI-generated creative includes metadata tagging it as AI-generated. User-facing output is labeled where legally required (e.g., California SB 942, EU AI Act transparency obligations).
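A minimal sketch of what such a provenance tag could look like. The schema below is illustrative only, not Layers' actual metadata format:

```python
import json
from datetime import datetime, timezone

def ai_generation_tag(model: str, asset_id: str) -> str:
    """Build an illustrative AI-provenance metadata record for a creative asset."""
    return json.dumps({
        "asset_id": asset_id,
        "ai_generated": True,  # the disclosure flag downstream surfaces can read
        "model": model,        # which model produced the asset
        "generated_at": datetime.now(timezone.utc).isoformat(),
    })

tag = ai_generation_tag("gemini", "creative-123")
```

Keeping the flag in structured metadata (rather than only in a visible label) lets downstream ad platforms and compliance tooling detect AI-generated assets programmatically.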

Human oversight

Every automated decision has:

  • Guardrails (min-age, min-spend, never-empty-adset — enforced by the optimizer).
  • Manual override (you can always intervene).
  • An audit trail in the partner audit log where the action is partner-visible.
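The guardrails above can be sketched as a pre-flight check the optimizer runs before any automated action. The thresholds and field names here are made up for illustration; real values would live in optimizer config:

```python
from dataclasses import dataclass

@dataclass
class AdsetState:
    age_days: int          # how long the ad set has been live
    spend: float           # total spend so far
    active_creatives: int  # creatives currently running in the ad set

# Illustrative thresholds, not Layers' actual defaults.
MIN_AGE_DAYS = 3
MIN_SPEND = 50.0

def can_pause_creative(adset: AdsetState) -> bool:
    """Allow the optimizer to pause a creative only if every guardrail holds."""
    if adset.age_days < MIN_AGE_DAYS:
        return False  # min-age: too early to judge performance
    if adset.spend < MIN_SPEND:
        return False  # min-spend: not enough data yet
    if adset.active_creatives <= 1:
        return False  # never-empty-adset: pausing would leave nothing running
    return True
```

Because the checks run before the action rather than after, a guardrail failure blocks the automated decision entirely, leaving it to manual override.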

See Safety & guardrails.
