Trace, debug, and monitor every agent's decision. Built for developers to ship with confidence
LLM agents can fail in unexpected ways. Visibility is your first line of defense when diagnosing and debugging those failures. AIGNE Observability provides insight into every decision and tool run.
Follow your agent's real-time execution path. See every tool call, decision, and API request in an interactive graph
Monitor token consumption and latency per run. Identify expensive steps and performance issues before they reach production
No extra code required. Import the library and traces are captured automatically
Go from a failed run to the exact error in one click. View the specific logs, inputs, and context needed to debug
Share a direct link to any execution trace. Work with your team to find and fix bugs faster
Observability is enabled by default in the AIGNE Framework.
Simply run your agent as you normally would using the CLI. Traces are automatically collected in the background.

Run Your Agent:

    aigne run
When you're ready to review a trace, start the local observability server.

Launch the Dashboard:

    aigne observe
Your browser will open a local dashboard where you can see a complete, step-by-step visual trace of your agent's execution
Tools and workflows for secure, auditable, and scalable AI development
Validate agent logic with objective data
Instantly trace issues through the visual graph and detailed logs
Catch logic flaws or performance issues
Monitor your agents in production with the exact same tools you use in development
Explore AIGNE Observability on GitHub and start building more reliable, transparent AI agents today.