Everything you need to know about HumanLayer
HumanLayer is a decision authority layer for AI agents.
It enables an AI agent to obtain, via API, a qualified, documented, and legally enforceable human decision when automation alone is not sufficient.
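To make the idea concrete, here is a minimal sketch of what such an API call could look like from the agent's side. The endpoint, payload fields, and decision_id response field are assumptions for illustration only, not the actual HumanLayer API.

```python
# Hypothetical sketch only: the endpoint, payload fields, and response shape
# below are assumptions for illustration, not the actual HumanLayer API.
import requests

API_BASE = "https://api.example-humanlayer.test/v1"  # placeholder URL

def request_human_decision(question: str, context: dict) -> str:
    """Submit a request for a qualified human decision and return its id."""
    resp = requests.post(
        f"{API_BASE}/decisions",
        json={"question": question, "context": context},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["decision_id"]  # assumed response field
```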
Arbitrium is the point at which an autonomous agent must pause and request a human decision. It is not a person; it is a system threshold.
A Sentinel is a qualified expert who issues an auditable decision when the system reaches Arbitrium.
No. Never. HumanLayer is not the Sentinel.
All decisions are made by qualified human Sentinels.
HumanLayer orchestrates, logs, and guarantees the process — not the content of the decision.
Your AI agents work… until they hit a moment that requires human accountability.
In most organizations, that moment causes:
HumanLayer allows you to cross that threshold without breaking compliance.
The organizations deploying AI agents: banks, insurers, hospitals, industrial groups, and large enterprises.
AI agents are technical users of the API, not contractual customers.
Because the problem isn't expertise, but:
HumanLayer:
You continue to use your internal experts. HumanLayer acts as a backstop, extension, and resilience layer.
Typically:
Qualified professionals whose identities, qualifications, and authorizations are verified and logged.
Responsibility belongs to the human who makes the decision.
HumanLayer provides traceability, context, and evidence — but never assumes the decision.
Yes. Each organization defines its own decision policies.
HumanLayer enforces these policies by design.
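As a rough illustration, organization-defined policies could be expressed along these lines. The triggers, qualifications, and time limits below are invented for the example, not real HumanLayer configuration.

```python
# Illustrative only: one possible way to express decision policies.
# Every field name, trigger, and threshold here is an invented example.
ARBITRIUM_POLICIES = [
    {
        "trigger": "loan_approval",
        "condition": lambda request: request["amount_eur"] > 50_000,
        "required_qualification": "licensed_credit_officer",
        "max_response_time_hours": 4,
    },
    {
        "trigger": "medical_triage_override",
        "condition": lambda request: True,  # always requires a human
        "required_qualification": "board_certified_physician",
        "max_response_time_hours": 1,
    },
]

def requires_sentinel(trigger: str, request: dict) -> bool:
    """Return True if this request crosses Arbitrium and needs a human decision."""
    return any(
        policy["trigger"] == trigger and policy["condition"](request)
        for policy in ARBITRIUM_POLICIES
    )
```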
HumanLayer is designed for regulated environments.
It facilitates compliance with regulatory requirements, internal audits, and external controls.
A clear, auditable artifact for every decision.
👉 No fuzzy emails. No lost Slack threads.
👉 A usable chain of evidence.
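For illustration, a decision artifact could look something like the record below. Every field name and value is an assumption made for this example.

```python
# Illustrative shape of a decision artifact (one "chain of evidence" entry).
# All field names and values are assumptions for this example.
example_decision_record = {
    "decision_id": "dec_0001",
    "requested_by_agent": "claims-agent-7",
    "question": "Approve a payout above the automated limit?",
    "context_digest": "sha256:<digest of the evidence shown to the Sentinel>",
    "sentinel": {
        "id": "sentinel_042",
        "qualification": "licensed_claims_adjuster",  # verified and logged
    },
    "decision": "approved_with_conditions",
    "rationale": "Payout justified; require a second invoice before transfer.",
    "decided_at": "2025-01-15T10:42:00Z",
    "policy_applied": "claims_over_limit_v3",
}
```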
HumanLayer applies a data minimization approach.
The exact details depend on the deployment model (standard or enterprise).
HumanLayer integrates as an infrastructure API.
For an AI agent, HumanLayer is a callable tool.
HumanLayer handles the orchestration, logging, and evidence around each request.
Behavior is configurable by the organization.
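As a sketch, an agent framework could register the escalation as an ordinary tool, along these lines. The tool name, schema, and handler are hypothetical, not a real HumanLayer SDK.

```python
# Hypothetical sketch: HumanLayer exposed to an agent as a callable tool.
# The tool schema and escalate_to_human() helper are illustrative assumptions.
from dataclasses import dataclass
from typing import Optional

@dataclass
class ToolResult:
    status: str                      # "decided" or "pending"
    decision: Optional[str] = None   # filled in once a Sentinel has answered

def escalate_to_human(question: str, context: dict) -> ToolResult:
    """Called when the agent reaches Arbitrium: pause and hand off to a human."""
    # A real deployment would submit the request to the decision API and
    # either wait for, or later poll, the Sentinel's answer.
    print(f"Escalating to a Sentinel: {question}")
    return ToolResult(status="pending")

# Registered like any other tool in the agent's tool list:
AGENT_TOOLS = {
    "escalate_to_human": {
        "description": "Request a qualified, auditable human decision "
                       "when policy or uncertainty requires it.",
        "handler": escalate_to_human,
    },
}
```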
Yes. HumanLayer can work with your internal experts, with external Sentinels, or with both.
HumanLayer remains the orchestration and evidence layer.
Per decision, not per hour.
Examples:
Pricing reflects value, risk covered, and level of responsibility.
Because a human decision:
It's a spend directly correlated to the value created.
No. HumanLayer does the opposite: it forces AI to recognize its limits.
HumanLayer defines:
Make HumanLayer the standard layer of human accountability for autonomous systems.
Over time:
HumanLayer does not automate the decision.
It automates human accountability around AI.
A question not covered here?
Request a demo