
Frequently Asked Questions

Everything you need to know about HumanLayer

General

HumanLayer is a decision authority layer for AI agents.

It enables an AI agent to obtain, via API, a qualified, documented, and legally enforceable human decision when automation alone is not sufficient.

Arbitrium is the point at which an autonomous agent must pause and request a human decision. It is not a person, but a system threshold.

A Sentinel is a qualified expert who issues an auditable decision when the system reaches Arbitrium.

No. Never. HumanLayer is not the Sentinel.

All decisions are made by:

  • an identified human
  • qualified and authorized
  • acting within a defined framework
  • with a complete trace of context and reasoning

HumanLayer orchestrates, logs, and guarantees the process — not the content of the decision.

Your AI agents work… until they hit a moment that requires human accountability.

In most organizations, that moment causes:

  • manual interruptions
  • out-of-system approvals
  • unpredictable delays
  • insufficient traceability

HumanLayer allows you to cross that threshold without breaking compliance.

Customers & usage

The organizations deploying AI agents: banks, insurers, hospitals, industrial groups, large enterprises.

AI agents are technical users of the API, not contractual customers.

Because the problem isn't expertise, but:

  • availability
  • prioritization
  • traceability
  • the SLA
  • integration with AI

HumanLayer:

  • mobilizes Sentinels at the critical moment
  • under a guaranteed timeline
  • with a structured format
  • and audit-ready evidence

You continue to use your internal experts. HumanLayer acts as a backstop, extension, and resilience layer.

Typically:

  • outside business hours
  • during load spikes
  • for rare expertise
  • when an independent second opinion is required
  • when traceability is critical (audit, dispute, regulator)

Decision makers & accountability

Qualified professionals:

  • lawyers
  • credit analysts
  • physicians
  • compliance officers
  • domain experts

Their identities, qualifications, and authorizations are verified and logged.

Responsibility belongs:

  • to the Sentinel who renders the decision
  • within the applicable contractual and regulatory framework

HumanLayer provides traceability, context, and evidence — but never assumes the decision.

Yes. Each organization defines:

  • the types of authorized decisions
  • validation levels
  • escalation rules
  • auto-rejection cases

HumanLayer enforces these policies by design.
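Such a policy could be pictured as a small structured object. The sketch below is purely illustrative, assuming a hypothetical configuration schema — the field names are not the actual HumanLayer API:

```python
from dataclasses import dataclass

@dataclass
class DecisionPolicy:
    """Illustrative policy: which decisions are authorized and how they escalate."""
    decision_type: str            # e.g. "credit_limit_increase"
    validation_levels: list[str]  # ordered roles that must approve
    escalation_after_s: int       # escalate if no decision within this window
    auto_reject: bool = False     # reject by default on timeout (safe default)

# Example: a two-level validation with auto-rejection on timeout.
policy = DecisionPolicy(
    decision_type="credit_limit_increase",
    validation_levels=["credit_analyst", "compliance_officer"],
    escalation_after_s=3600,
    auto_reject=True,
)
```

The point of expressing policies as data, rather than code paths, is that the platform can enforce them uniformly and log every evaluation.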

Compliance & audit

HumanLayer is designed for:

  • auditability
  • traceability
  • human accountability
  • separation of duties

It facilitates compliance with regulatory requirements, internal audits, and external controls.

A clear artifact:

  • which AI agent submitted the request
  • what context was provided
  • who decided
  • when
  • on what basis
  • with what justification

👉 No fuzzy emails. No lost Slack threads.
👉 A usable chain of evidence.
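The artifact above can be pictured as one structured, serializable record. This is a sketch only — the field names and values are assumptions for illustration, not the product's actual schema:

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class DecisionRecord:
    """Illustrative audit artifact for one human decision."""
    agent_id: str       # which AI agent submitted the request
    context: dict       # what context was provided
    sentinel_id: str    # who decided (a verified, qualified human)
    decided_at: str     # when (ISO 8601, UTC)
    basis: str          # on what basis (policy, regulation, contract clause)
    justification: str  # with what justification

# Hypothetical example record.
record = DecisionRecord(
    agent_id="agent-claims-07",
    context={"claim_id": "C-1042", "amount_eur": 18500},
    sentinel_id="sentinel-jdoe",
    decided_at="2025-01-15T09:30:00Z",
    basis="internal policy CLM-4.2",
    justification="Amount exceeds auto-settlement ceiling; documents consistent.",
)

# Audit-ready: the whole record serializes to a single JSON artifact.
evidence = json.dumps(asdict(record), indent=2)
```

A record like this can be handed to an auditor, a regulator, or a court as a self-contained chain of evidence.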

HumanLayer applies a data minimization approach:

  • only the elements necessary for the decision
  • restricted access
  • strict compartmentalization

Exact arrangements depend on the deployment tier (standard / enterprise).

Technology & API

As an infrastructure API:

  • synchronous or asynchronous calls
  • webhooks
  • SDKs (Python, TypeScript)
  • integration with agent frameworks

For an AI agent, HumanLayer is a callable tool.

HumanLayer handles:

  • timeouts
  • escalations
  • fallbacks
  • default rejections

Behavior is configurable by the organization.
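The timeout-and-fallback behavior above can be sketched as a polling loop with a safe default. Assuming a hypothetical helper (none of these names come from the product):

```python
import time

def await_decision(poll, timeout_s: float, on_timeout: str = "reject") -> dict:
    """Poll for a decision; apply the configured fallback on timeout.

    `poll` returns the decision dict, or None while the request is pending.
    `on_timeout` is "reject" (safe default) or "escalate".
    """
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        decision = poll()
        if decision is not None:
            return decision
        time.sleep(0.01)
    if on_timeout == "reject":
        return {"approved": False, "justification": "Timed out: rejected by default."}
    return {"approved": None, "justification": "Timed out: escalated to the next level."}

# Example: no Sentinel responds within the window, so the safe default applies.
result = await_decision(lambda: None, timeout_s=0.05)
```

Rejecting by default on timeout is the conservative choice: an unanswered request should never silently become an approval.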

Yes. HumanLayer can work:

  • with internal Sentinels
  • with external Sentinels
  • or a mix of both

HumanLayer remains the orchestration and evidence layer.

Business & pricing

Per decision, not per hour.

Examples:

  • compliance validation
  • expert judgment
  • sign-off by an authorized authority

Pricing reflects value, risk covered, and level of responsibility.

Because a human decision:

  • avoids major risks
  • unblocks entire workflows
  • replaces costly friction

It is spend directly correlated with the value created.

Vision & ethics

No. HumanLayer does the opposite: it forces AI to recognize its limits.

HumanLayer defines:

  • when AI can act on its own
  • and when it must stop

To make HumanLayer the standard layer of human accountability for autonomous systems.

Over time:

  • AI agents will know when to request validation
  • organizations will be able to automate without losing control
  • accountability will remain human, orchestrated by software

HumanLayer does not automate the decision.
It automates human accountability around AI.

A question not covered here?

Request a demo