Govern AI agents on LangChain
Wrap LangChain agent tool calls with Cordum governance. The LangChain pack intercepts tool invocations, evaluates them against policy before execution, and logs every action for compliance auditing.
What this pack does
- Pre-dispatch policy evaluation on tool calls
- Approval gates for high-risk tool invocations
- Output safety checks on tool results
- Full audit trail of agent reasoning and actions
Use cases
- Require approval before LangChain agents execute database writes
- Block agents from calling sensitive APIs without policy clearance
- Audit all tool-call decisions in production LangChain agents
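Use cases like these are typically captured as per-tool policy rules. The YAML below is a hypothetical sketch of what such rules could look like; the field names and schema are illustrative assumptions, not Cordum's documented policy format.

```yaml
# Hypothetical policy sketch -- Cordum's actual rule schema may differ.
policies:
  - tool: sql_database_write
    decision: require_approval      # human sign-off before execution
    approvers: ["#data-platform"]
  - tool: payments_api
    decision: deny                  # blocked without policy clearance
    reason: "Sensitive API: no policy clearance"
  - tool: "*"
    decision: allow                 # default: allow, but audit-log every call
    audit: true
```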
Quick setup
1. Install the LangChain pack: `cordumctl pack install langchain`
2. Wrap your LangChain agent with the Cordum callback handler
3. Define tool-level policy rules
4. Enable the pack and run your agent with governance
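The intercept-evaluate-log flow behind these steps can be sketched in plain Python. Everything below (the `Verdict` type, `governed_call`, the rule format) is an illustrative assumption with no external dependencies, not the Cordum SDK's actual API:

```python
# Minimal sketch of pre-dispatch governance. All names here are illustrative
# assumptions, not Cordum's real API surface.
from dataclasses import dataclass
from enum import Enum
from typing import Callable

class Decision(Enum):
    ALLOW = "allow"
    DENY = "deny"
    REQUIRE_APPROVAL = "require_approval"

@dataclass
class Verdict:
    decision: Decision
    reason: str = ""

def evaluate(tool_name: str, rules: dict[str, Verdict]) -> Verdict:
    """Look up a rule for the tool; default-allow when no rule matches."""
    return rules.get(tool_name, Verdict(Decision.ALLOW, "no matching rule"))

audit_log: list[dict] = []

def governed_call(tool_name: str, fn: Callable[[str], str], arg: str,
                  rules: dict[str, Verdict]) -> str:
    """Evaluate policy *before* dispatching the tool, and log the outcome."""
    verdict = evaluate(tool_name, rules)
    audit_log.append({"tool": tool_name, "input": arg,
                      "decision": verdict.decision.value,
                      "reason": verdict.reason})
    if verdict.decision is Decision.DENY:
        # Structured denial the agent can read and adapt to.
        return f"DENIED: {verdict.reason}"
    if verdict.decision is Decision.REQUIRE_APPROVAL:
        return f"PENDING_APPROVAL: {verdict.reason}"  # a real pack would block here
    return fn(arg)

rules = {
    "db_write": Verdict(Decision.REQUIRE_APPROVAL, "database writes need sign-off"),
    "payments_api": Verdict(Decision.DENY, "sensitive API without clearance"),
}
print(governed_call("search", lambda q: f"results for {q}", "cordum", rules))
print(governed_call("payments_api", lambda q: "charged", "100 USD", rules))
```

In a real integration the same check would presumably hang off LangChain's callback interface (for example a `BaseCallbackHandler.on_tool_start` hook) rather than a manual wrapper, which is what lets the pack sit on top of an unmodified agent.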
Frequently asked questions
How does Cordum govern LangChain actions?
Cordum evaluates every LangChain action against your policy before execution. The Safety Kernel returns Allow, Deny, or Require Approval decisions, ensuring agents operate within approved boundaries.
Do I need to modify my existing LangChain setup?
No. The Cordum LangChain pack installs as an overlay. It intercepts agent actions at the governance layer without changing your existing LangChain configuration.
What happens if an agent action is denied?
The action is blocked before execution, logged in the audit trail, and optionally triggers an alert. The agent receives a structured denial with the policy reason, so it can adjust its approach.
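As a rough illustration of "structured denial," the snippet below parses a denial payload and picks the agent's next move. The field names (`status`, `policy`, `retry_with_approval`) are assumptions for the sketch; Cordum's actual denial schema may differ.

```python
# Illustrative shape of a denial payload; the fields below are assumptions,
# not Cordum's documented schema.
import json

denial = json.loads("""{
  "status": "denied",
  "tool": "db_write",
  "policy": "prod-write-guard",
  "reason": "Writes to production require approval",
  "retry_with_approval": true
}""")

def next_step(payload: dict) -> str:
    """Decide how the agent adapts after a structured denial."""
    if payload.get("retry_with_approval"):
        return "request_approval:" + payload["tool"]
    return "choose_alternative_tool"

print(next_step(denial))  # the agent routes to an approval request
```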
Other integrations
- Slack: approval notifications and agent alerts in Slack channels
- GitHub: govern AI agent actions on GitHub repositories
- AWS: govern AI agent actions across AWS services
- Jira: governance for AI agents managing Jira workflows
- Kubernetes: govern AI agents responding to Kubernetes incidents
- Datadog: feed Datadog alerts into governed agent workflows