If you don't see your question here, reach out.
A real-time monitoring tool that runs alongside your AI agent. It tracks per-turn cost and context accumulation, and flags the exact moment when spawning a fresh session becomes cheaper than continuing. Single HTML file, runs locally, no account required.
The AEL framework applies to any provider that offers context caching. Currently supported in AEL products: Anthropic Claude, OpenAI GPT, and Grok (xAI). Gemini support is planned. OpenAI delivers the highest savings because it charges nothing to write to the cache. Grok 4 uses the standard 10:1 cache ratio; Grok 4.1 Fast uses a 4:1 ratio, but AEL still applies.
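To make the ratios above concrete, here is a minimal sketch of per-turn input cost under context caching. The multipliers are illustrative assumptions derived only from the text (a 10:1 ratio means a cached read costs one tenth of a normal input token; the 25% cache-write premium shown for Anthropic and Grok is an assumption, not a stated AEL figure):

```python
# Illustrative cache pricing model. Multipliers are assumptions for the sketch,
# expressed relative to the provider's base input-token price.
PROVIDERS = {
    # provider: (cache_write_multiplier, cache_read_multiplier)
    "anthropic":     (1.25, 0.10),  # assumed write premium; 10:1 read ratio
    "openai":        (1.00, 0.50),  # "zero cache write cost" -> no write premium
    "grok-4":        (1.25, 0.10),  # standard 10:1 ratio
    "grok-4.1-fast": (1.25, 0.25),  # 4:1 ratio per the text
}

def turn_input_cost(provider, cached_tokens, new_tokens, base_price_per_tok):
    """Cost of one turn's input: the cached prefix is billed at the read rate,
    newly added tokens at the write rate."""
    write_mult, read_mult = PROVIDERS[provider]
    return base_price_per_tok * (cached_tokens * read_mult + new_tokens * write_mult)
```

For example, re-reading a 1,000-token cached prefix on Grok 4 costs the same as 100 fresh input tokens, which is why long cached contexts still accumulate real cost every turn.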
Not for the Sidecar: it monitors externally and recommends spawn timing without touching your agent. The Dashboard reads session metadata from your existing log files, so no changes to agent logic are required there either.
AEL ChatReset is designed specifically for browser-based AI interfaces. The Dashboard works with API-connected agents (OpenClaw, LangChain, CrewAI, raw API, etc.).
OpenClaw, LangChain, CrewAI, OpenAI Agents SDK, and raw Anthropic, OpenAI, and Grok API adapters are all supported. If your framework writes session data to a log file, AEL can read it.
The headline numbers come from real sessions and parametric simulation. We're transparent about the assumptions; the full methodology is on the Research page. Real deployments typically see 2–4× improvement depending on session length, tool usage, and context accumulation rate.
V_shed is the recurring cost of the irrelevant tokens currently in your agent's context: what you pay every turn to re-read content that's no longer useful. When V_shed exceeds S_real (the one-time cost of spawning a fresh session), spawning saves money.
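The decision rule can be sketched in a few lines. This is a simplified model, not AEL's actual accounting: it assumes V_shed is stale tokens times the cached-read price times the turns remaining, and that S_real is dominated by writing a handoff summary into the new session. All parameter names here are illustrative.

```python
def v_shed(stale_tokens, cached_read_price, turns_left):
    """Projected cost of re-reading stale context on every remaining turn."""
    return stale_tokens * cached_read_price * turns_left

def s_real(handoff_tokens, input_price):
    """One-time cost of spawning fresh: writing a handoff summary to a new session."""
    return handoff_tokens * input_price

def should_spawn(stale_tokens, cached_read_price, turns_left,
                 handoff_tokens, input_price):
    """Spawn when the projected shed cost exceeds the real spawn cost."""
    return v_shed(stale_tokens, cached_read_price, turns_left) > \
           s_real(handoff_tokens, input_price)
```

With 50k stale tokens, 20 turns left, and cached reads at a tenth of the input price, the shed cost dwarfs a 2k-token handoff, so the rule says spawn; with one turn left it says keep going.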
No. One-time purchase. No monthly fees, no recurring billing. You own it.
Enterprise licensing (per-seat, usage-based, or annual contract) is available for teams running multiple agents. Get in touch to discuss.