The open-source Observability 2.0 database. One engine for metrics, logs, and traces — replacing Prometheus, Loki & ES.
The open-source LLMOps platform: prompt playground, prompt management, LLM evaluation, and LLM observability all in one place.
Zero-code LLM security & observability proxy. Real-time prompt injection detection, PII scanning, and cost control for OpenAI-compatible APIs. Built in Rust.
🎙️ Voice-native document intelligence using Gemini, ElevenLabs STT/TTS, and Datadog observability — turning text documents into spoken conversations.
AICW Rankings is an open-source tool for AI/GEO marketers to track brand mentions across AI assistants.
OpenTelemetry wrapper for Claude Code CLI that logs tool calls, token usage, costs, and execution traces to Logfire, Sentry, Honeycomb, or Datadog. Drop-in replacement that swaps 'claude' command for 'claudia'.
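The kind of wrapper described above boils down to timing each model call and logging its token usage alongside the latency. A minimal stdlib-only sketch (the decorator name, the `fake_call` function, and the `usage` dict shape are all hypothetical, not the tool's actual API):

```python
import functools
import logging
import time

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("llm-wrapper")

def traced(fn):
    """Hypothetical sketch: wrap an LLM call, recording latency and token usage."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        result = fn(*args, **kwargs)  # result is assumed to carry a 'usage' dict
        elapsed = time.perf_counter() - start
        log.info("call=%s latency=%.3fs tokens=%s",
                 fn.__name__, elapsed, result.get("usage"))
        return result
    return wrapper

@traced
def fake_call(prompt):
    # Stand-in for a real client call; returns a response with usage metadata.
    return {"text": "ok", "usage": {"input_tokens": 12, "output_tokens": 3}}
```

A real implementation would emit these fields as OpenTelemetry span attributes instead of log lines, so any OTLP-compatible backend (Logfire, Honeycomb, Datadog) can ingest them.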
Open Source Agent Alignment: Make your agents follow rules. One line of code to enforce, trace, and improve.
Real-time hardware and LLM inference monitoring — GPU, CPU, memory, and vLLM metrics streamed to a dashboard.
Reduce your OpenClaw agent costs. Free real-time LLM cost tracking + dashboard. Installs in 60 seconds.
Continuous LLM governance monitoring for regulated environments - EU AI Act, GDPR, ANSSI. Self-hosted, profile-driven, no data leaves your infrastructure.
End-to-end tracing and observability for OpenClaw multi-agent systems
Track what LLMs say before users choose your developer tool.
Create an evaluation framework for your LLM based app. Incorporate it into your test suite. Lay the monitoring foundation.
Real-time observability for Claude Code agents. Track conversations, tool calls, and token usage across all sessions - zero config.
Open-source AI visibility monitoring and analytics. Track how your brand appears across ChatGPT, Perplexity, Google AI Overviews, and other LLMs. BYOK. Self-hosted. Free alternative to Profound and Peec AI.
A curated list of the best AgentOps tools for 2026 — observability, tracing, evaluation, cost monitoring, and guardrails for LLM agents. Covering open-source and SaaS tools with feature benchmarks and architecture guidance.
Where did your tokens go? Spans, latency percentiles, alerts.
Lightweight behavioral regression testing for LLMs. Companion evaluation tool for Constitutional Identity Training (CIT) research.
AI model health monitor for LLM apps – runtime checks for drift, hallucination risk, latency, and JSON/format quality on any OpenAI, Anthropic, or local client.
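Runtime checks like the ones listed above can be as simple as validating each response against a latency budget and a format contract. A minimal sketch, assuming the function name, thresholds, and report shape are illustrative rather than any particular tool's API:

```python
import json

def check_llm_output(raw: str, elapsed_s: float, max_latency_s: float = 2.0) -> dict:
    """Hypothetical health check: flags slow calls and non-JSON output."""
    issues = []
    if elapsed_s > max_latency_s:
        issues.append("latency")
    try:
        json.loads(raw)  # format-quality check: output must be valid JSON
    except json.JSONDecodeError:
        issues.append("format")
    return {"healthy": not issues, "issues": issues}

report = check_llm_output('{"answer": 42}', elapsed_s=0.8)
# report["healthy"] is True; a malformed payload would add "format" to issues
```

Drift and hallucination-risk checks need reference data or a judge model, but they slot into the same per-call report structure.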
Real-time monitoring dashboard for AI coding agent sessions (Claude Code + Codex CLI)