LangChain
LangSmith Fleet launches two agent authorization models: Assistants and Claws
feature · platform · release · blog.langchain.com

Two Authorization Patterns

LangSmith Fleet now supports two fundamentally different approaches to agent authorization:

Assistants operate "on-behalf-of" their end user. When Alice uses an onboarding agent with access to Notion and Rippling, the agent authenticates as Alice and can see only what Alice herself can see. Bob using the same agent sees only his own data. This pattern requires mapping end users (e.g., Slack user IDs) to their LangSmith credentials at runtime.
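A minimal sketch of that runtime mapping, using an illustrative `CredentialStore` class (the class and method names are assumptions, not part of the LangSmith Fleet API):

```python
class CredentialStore:
    """Maps channel identities (e.g., Slack user IDs) to stored credentials."""

    def __init__(self):
        self._by_slack_id = {}

    def link(self, slack_user_id, credentials):
        # Called once when a user connects their accounts.
        self._by_slack_id[slack_user_id] = credentials

    def resolve(self, slack_user_id):
        # Called per request: each user runs with their own credentials.
        creds = self._by_slack_id.get(slack_user_id)
        if creds is None:
            raise PermissionError(f"No linked credentials for {slack_user_id}")
        return creds


store = CredentialStore()
store.link("U_ALICE", {"notion_token": "alice-token"})

# Alice's requests run as Alice; Bob cannot act until he links his own accounts.
alice_creds = store.resolve("U_ALICE")
```

The key property is that the authorization context is chosen per request, so the agent can never return data its current user could not fetch directly.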

Claws use a fixed set of credentials controlled by the agent creator. Alice might create an agent and expose it via Slack or email; everyone who interacts with it uses the same authorization context. This is particularly useful when you want to grant agents limited, controlled access—for example, a dedicated Notion account with only the pages you want the agent to access.
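The contrast with a Claw can be sketched in a few lines: every caller shares one creator-controlled credential set (the `handle_request` helper and token value below are hypothetical):

```python
# One fixed credential set, chosen by the agent's creator -- e.g., a dedicated
# Notion account scoped to only the pages the agent should reach.
FIXED_CREDENTIALS = {"notion_token": "agent-scoped-token"}  # illustrative value


def handle_request(caller_id, action):
    # Whoever is talking to the agent, the authorization context is the same.
    return {"caller": caller_id, "credentials": FIXED_CREDENTIALS, "action": action}


alice_result = handle_request("U_ALICE", "search_notion")
bob_result = handle_request("U_BOB", "search_notion")
```

Alice and Bob get identical access here, which is exactly why the creator scopes the shared account narrowly.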

Real-World Examples

The distinction becomes clear with concrete use cases:

  • Onboarding Agent (Assistant): Shares Slack and Notion access with end users in Slack; each user sees only their own accessible data.
  • Email Agent (Claw): Responds to incoming emails on your behalf, checking your calendar and sending responses using fixed credentials (with human-in-the-loop approval for sends).
  • Product Agent (Claw): Monitors competitors and answers roadmap questions using a dedicated Notion account, exposed via custom Slack bot.

Key Considerations

Channel Support: Assistants are currently available only in channels where user-to-credential mapping is supported. LangSmith supports Slack, Gmail, Outlook, and Teams, with more coming.

Safety & Permissions: When exposing Claws across channels to multiple users, consider human-in-the-loop guardrails to gate sensitive or potentially dangerous actions. Permission controls allow you to specify who can edit shared agents and their associated memory.
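One way to picture such a guardrail, as a hedged sketch (the action names and `execute` helper are assumptions for illustration, not the product's API):

```python
# Actions that should pause for human approval before running.
SENSITIVE_ACTIONS = {"send_email", "delete_page"}


def execute(action, approved=False):
    """Run an action immediately if safe; otherwise hold it for review."""
    if action in SENSITIVE_ACTIONS and not approved:
        return ("pending_approval", action)
    return ("executed", action)


held = execute("send_email")                    # gated until a human approves
sent = execute("send_email", approved=True)     # runs after approval
read = execute("check_calendar")                # safe reads run immediately
```

Gating writes while letting reads through keeps a shared Claw useful without letting any caller trigger irreversible actions unreviewed.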

Future Enhancements: LangSmith plans to introduce user-specific memory so that Assistants do not share sensitive information across users (e.g., preventing an agent from recalling Alice's data while chatting with Bob).