Anti-hallucination research mode for Claude Code. Toggle on/off to enforce citation requirements and source grounding.
Updated Apr 16, 2026
Score any document. Prove every claim.
Hierarchical RAG architecture scaling to 693K chunks on consumer hardware (4GB VRAM). Features 3-address routing, hybrid vector+graph fusion, and SetFit classification.
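Hybrid vector+graph fusion, as named above, typically blends a dense-similarity score with a graph-walk score per chunk. A minimal sketch of that idea — the weighting scheme, score ranges, and `alpha` parameter are assumptions for illustration, not the repository's actual formula:

```python
def fuse_scores(vector_hits, graph_hits, alpha=0.7):
    """Blend vector-similarity and graph-walk scores per chunk id.

    vector_hits / graph_hits: dicts mapping chunk id -> score in [0, 1].
    alpha weights the vector channel; this linear blend is a hypothetical
    stand-in for whatever fusion the project really uses.
    """
    ids = set(vector_hits) | set(graph_hits)
    fused = {
        cid: alpha * vector_hits.get(cid, 0.0)
             + (1 - alpha) * graph_hits.get(cid, 0.0)
        for cid in ids
    }
    # Return chunk ids ranked by fused score, best first.
    return sorted(fused, key=fused.get, reverse=True)
```

Chunks found by only one channel still receive a partial score, so graph-only neighbors can outrank weak vector matches.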
Persistent memory MCP server for AI agents — Rust, 19 tools, knowledge graph, Hebbian learning, episodic memory, contradiction detection, prospective triggers, Bayesian calibration, zero-config Docker setup.
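The "Hebbian learning" mentioned above usually means that memories retrieved together strengthen their association. A toy sketch of that rule — the class name, learning rate, and weight update are illustrative assumptions, not the server's Rust implementation:

```python
class HebbianGraph:
    """Toy memory graph where co-retrieved nodes strengthen their link."""

    def __init__(self, rate=0.1):
        self.rate = rate
        self.weights = {}  # (a, b) -> association strength in [0, 1)

    def co_activate(self, a, b):
        # "Fire together, wire together": move the edge weight a
        # fraction of the remaining distance toward 1.0.
        key = tuple(sorted((a, b)))
        w = self.weights.get(key, 0.0)
        self.weights[key] = w + self.rate * (1.0 - w)

    def strength(self, a, b):
        return self.weights.get(tuple(sorted((a, b))), 0.0)
```

The asymptotic update keeps weights bounded below 1.0, so frequently co-activated pairs saturate instead of growing without limit.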
Reliable research infrastructure for AI agents. Evidence-backed web search with citations, confidence scores, and Clarity anti-hallucination. MCP server, REST API, Python SDK.
A minimal graph engine for grounded AI — records, associates, and retrieves, but never invents. Written in Rust.
65 plugins that turn Claude Code into an autonomous development team. 24 agents, 34 skills, 5 hooks. Includes 12-plugin anti-hallucination suite. One-line install.
An overfitted SD prompt engine with severe "aesthetic snobbery," forcibly correcting mundane ideas into industrial-grade rendering instructions with extreme physical texture.
A strict, deterministic LLM protocol for loading, reading and activating the DCQN.MATRIX Axiomatics from the OSF DOI (10.17605/OSF.IO/QWA6S), including anti-simulation safeguards and full formal reconstruction into DCQN_LOGIK_SESSION_V1.
Native rules, hooks, and guards that prevent Claude Code and Codex from hallucinating code, duplicating files, or shipping unverified changes.
Context governance kernel for LLM agents. Predicts entropy, blocks hallucinations. Pairs with opencode-dcp.
Secure, high-accuracy SQL agent MCP server with an included admin panel.
🔍 AI-powered assistant for collecting business-article source material | interactive questioning, anti-hallucination verification, Word export | built for journalists, analysts, and content creators
The Anti-Hallucination data layer for B2B Sourcing. Deep-verified global supply chain entities designed for RAG and LLM instruction tuning.
🌀 Verifiable multi-agent AI with shadow auditing that transforms uncertain scenarios into transparent, confidence-scored strategic decisions.
Claude Code plugin for vulnerability research. Six-gate ladder (A -> B -> B.5 -> C -> C.5 -> D) enforced by hooks; 17 deterministic Python skill modules; hands-off intent-consumer loop. AGPL-3.0.
Evaluation patterns, release gates, and anti-hallucination techniques for developer-focused AI workflows.
I built a production-style RAG system focused on grounded generation rather than open-ended LLM output. Design priorities: retrieval quality, validation, and measurable confidence, not just document chat.
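A common validation step in grounded-generation systems like this is rejecting answer sentences that cite no retrieved source. A minimal sketch of such a guardrail — the `[doc-N]` citation convention and function name are assumptions, not this project's actual output format:

```python
import re

def check_grounding(answer, retrieved_ids):
    """Return answer sentences lacking a valid [doc-N] citation.

    retrieved_ids: ids of the chunks actually retrieved for this query.
    A sentence passes only if it cites at least one retrieved chunk.
    """
    ungrounded = []
    for sentence in re.split(r"(?<=[.!?])\s+", answer.strip()):
        cited = set(re.findall(r"\[(doc-\d+)\]", sentence))
        if not cited or not cited <= set(retrieved_ids):
            ungrounded.append(sentence)
    return ungrounded
```

A caller can refuse to ship (or re-generate) any answer for which the returned list is non-empty.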
RAG-powered LLM assistant for HR policy Q&A with ChromaDB, guardrails, citation tracking, and evaluation framework. FastAPI + Streamlit.
openclaw-Grounding Guard: effectively suppresses the OpenClaw hallucination rate (automatic injection of traceable context, plus audit alerts on outbound annotations).