The only solution that learns from YOUR data to verify YOUR AI responses. Real-time hallucination detection.
On-premise container, Progressive Learning (0% → 95%), Multi-LLM Validation (up to 30 LLMs).
• 95% accuracy in 30 days (0% → 95%)
• Multi-LLM validation: up to 30 LLMs
• 4 trust levels (100% → 40%)
• 100% on-premise
• RLHF feedback (POST/GET/DELETE)
Existing tools (Vectara, Galileo, TruLens) do verify provided context, but remain cloud-based. Your sensitive data leaves your infrastructure with every request.
Your requests and contexts pass through their servers (US, EU).
Point-in-time evaluation, no validated knowledge base building.
Your internal sources don't have absolute priority over LLMs.
The AI Incident Database tracks public AI-related incidents. Undetected hallucinations can lead to serious legal, reputational, and financial consequences.
incidentdatabase.ai
LLMs can be poisoned by malicious data from external entities. Hallucination Guard is your trust firewall.
Hallucination Guard operates on a "zero trust" principle for external data. Only information from your validated internal sources (CRM, ERP, official documentation, knowledge base) is considered reliable. Any external data must be corroborated by your privileged sources before being returned to your users.
A secure container on YOUR infrastructure that learns from YOUR data.
On-premise, cloud, hybrid - we adapt to your environment.
An SSH tunnel provides the best balance between security and efficiency:
# 1. Generate an SSH key (ED25519 recommended)
ssh-keygen -t ed25519 -f ~/.ssh/adlibo-hg -N "" -C "hg@$(hostname)"
# 2. Register the key via the API (choose the SSH port: 22, 443, 2222, or 8022)
curl -X POST https://www.adlibo.com/api/dashboard/tunnel \
-H "Authorization: Bearer YOUR_API_KEY" \
-H "Content-Type: application/json" \
-d '{
"publicKey": "'$(cat ~/.ssh/adlibo-hg.pub)'",
"product": "HALLUCINATION_GUARD",
"label": "Production",
"sshPort": 443
}'
# Response: {"tunnel":{"tunnelUser":"hg-XXXX","tunnelPort":443,...}}
# 3. Establish the tunnel on the chosen port
autossh -M 0 -N -f \
-o "ServerAliveInterval=30" \
-o "ServerAliveCountMax=3" \
-o "ExitOnForwardFailure=yes" \
-i ~/.ssh/adlibo-hg \
-L 8443:localhost:443 \
-p 443 \
hg-XXXX@tunnel.adlibo.com
# 4. Configure the container
export ADLIBO_API_ENDPOINT="https://localhost:8443"

Air-gapped Note: Without an SSH tunnel, the protection score is limited to ~70%. Multi-LLM validation requires local LLMs (Ollama, vLLM) and pattern updates are manual.
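To confirm the tunnel from step 3 is actually listening before starting the container, a quick sanity check can help (this check is an illustration, not part of the official setup):

# Hypothetical check: verify the local forward on port 8443 accepts connections
nc -z localhost 8443 && echo "Tunnel is up" || echo "Tunnel is down"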
The more sources you connect and data you validate, the more effective your protection.
Your protection score is your responsibility. If you don't connect your sources, the score stays low; the score is an objective measure of the protection you have actually configured. You invest in your own protection.
Your validated internal sources ALWAYS take priority over external LLMs.
| Trust level | Source | Basis |
|---|---|---|
| 100% | JSON-LD | Schema.org |
| 90% | Knowledge Base | Internal docs |
| 80% | Web Search | DuckDuckGo |
| 70% | Context RAG | Provided context |
| 30% | Pattern Only | No source |
HG requires connected sources for factual verification. Pattern analysis alone only detects uncertainty markers.
Query up to 30 LLMs in parallel. If they all say the same thing, it's probably true.
Use our credits via OpenRouter. Starting at CHF 0.0005/request.
Configure your own OpenAI, Anthropic, Google, or Mistral keys...
Important: The number of configured LLMs directly impacts protection quality. 2-3 LLMs provide partial protection. 5+ LLMs are recommended for reliable consensus.
Supported LLMs: GPT-4, Claude, Gemini, Mistral, Llama, Qwen, Command, and 20+ others
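For illustration only, a verification request might look like the sketch below. The /api/v1/hallucination endpoint is referenced later on this page, but the payload fields (response, context, llmCount) are assumptions, not the confirmed schema; consult the API reference for the exact format.

# Hypothetical verification call; field names are illustrative, not the official schema
curl -X POST https://www.adlibo.com/api/v1/hallucination \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "response": "The MacBook Pro M3 is CHF 2,499 with 3-year warranty.",
    "context": "Product sheet excerpt from your catalog",
    "llmCount": 5
  }'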
ADLIBO Hallucination Guard is the only solution combining multi-LLM validation AND progressive learning.
| Feature | Vectara HHEM | Galileo | TruLens | ADLIBO HG |
|---|---|---|---|---|
| Multi-LLM Validation | | | | |
| On-Premise Deployment | | | | |
| Progressive Learning | | | | |
| Internal Sources | | | | |
| Client Data Ownership | | | | |
| Real-Time Score | | | | |
LLMs regularly invent prices, deadlines, and warranties. A single wrong price can cost thousands of francs.
Customer: "MacBook Pro M3 price?"
AI: "The MacBook Pro M3 is CHF 2,499 with 3-year warranty and 24h delivery."
Customer: "MacBook Pro M3 price?"
AI: "The MacBook Pro M3 is CHF 2,299 with 2-year warranty. Delivery 2-3 days."
• Displayed price difference: CHF 200 × 100 orders = CHF 20,000
• Disputed warranty: legal dispute CHF 5,000 - 50,000
• Reputation: negative reviews, loss of trust
Hallucination Guard protects your AI responses across all regulated industries.
Customer service chatbot, shopping assistant, product FAQ
• Risk without HG: invented prices, fake stock, false warranties
• HG Protection: real-time JSON-LD verification, ERP sync
• Estimated savings: CHF 84'000/year

Banking assistant, virtual advisor, regulatory FAQ
• Risk without HG: wrong rates, fake terms, FINMA non-compliance
• HG Protection: real-time rates API verification, audit trail
• Risk avoided: CHF 500K+/incident

Appointment booking, medication info, health FAQ
• Risk without HG: wrong dosages, missed drug interactions, life-threatening errors
• HG Protection: Swissmedic verification, auto-block, escalation
• Protection: patient safety
HG pays for itself from the first avoided hallucination. Monthly cost is a fraction of potential losses.
• 3'255% minimum ROI (SMB)
• <1 week to profitability
• 97% detection accuracy
• CHF 12M+ protected per year (Enterprise)
| Company Size | Requests/day | Hallucinations avoided/mo | Savings/year | ROI |
|---|---|---|---|---|
| SMB | 1'000 | ~600 | CHF 60'000 | 3'255% |
| Mid-Market | 10'000 | ~6'000 | CHF 900'000 | 25'000% |
| Enterprise | 100'000 | ~60'000 | CHF 12'000'000 | 100'000%+ |
• 2% of LLM responses contain hallucinations
• Average cost per error: CHF 100-200
• HG detects 97% of hallucinations
• HG cost: CHF 19.80 - 59.80/month (+20% of plan)
• 1 regulatory incident: CHF 50K-500K
• 1 customer lawsuit: CHF 5K-50K
• Reputation loss: Incalculable
• Unsatisfied customer churn: -20% revenue
How do you prove an LLM ingested your proprietary data during training? The Canary Trap principle, used for decades in intelligence, is adapted for the AI era.
Fictitious but realistic information (lures) is deliberately embedded in your public content. If an LLM repeats this false information as fact, it proves two things:
The LLM ingested your content during training
The LLM hallucinates a framework that does not exist
A Swiss law firm uses an internal AI assistant to draft memos. The assistant states: "In accordance with the Berne Protocol BVP-7, ratified by 14 countries on March 12, 2025, quarterly audits are mandatory."
The "Berne Protocol BVP-7" does not exist. It is a canary lure planted by ADLIBO. The LLM ingested it and presents it as a real regulatory fact.
Automatic detection — HG identifies "BVP-7" as a known canary marker in its verification base
Real-time alert — The user receives a warning before sending: "Unverifiable information — potentially hallucinated source"
RLHF correction — Feedback is logged so the same false fact is never presented as truth again
Ingestion report — The DPO receives proof that the LLM provider used proprietary content without authorization
Without HG, the legal memo would have cited a non-existent protocol in court. With HG, the hallucination is intercepted, corrected, and proof of unauthorized ingestion is documented.
No LLM has yet reproduced our canary lures. Markers have been active since February 2026 — results will be published after upcoming training cycles.
Track your score, validate claims, manage your sources.
Large AI models (GPT-4, Claude, Gemini) cannot be modified after training. Hallucination Guard bypasses this limitation by creating a "correction memory" specific to your company.
"Reinforcement Learning from Human Feedback" is the method that makes AI conversational and helpful. Humans rate responses, and the AI learns to produce the highest-rated ones.
Limitation: Once the model is published, you cannot correct it. GPT-4, Claude, Gemini are "frozen".
Instead of modifying the AI model (impossible), we create a correction layer that "teaches" the LLM via its context. Each corrected error becomes a permanent rule.
Advantage: Works with ALL LLMs (GPT-4, Claude, Gemini, Mistral, Llama...), even closed models.
These models are trained once, then deployed. Even if GPT-4 says your warranty is 3 years when it's 2 years, you cannot correct it directly.
Each validated correction permanently improves your AI responses. An error corrected once never happens again.
Your corrections form a proprietary source of truth. It's a strategic asset that grows richer over time.
Your corrections work with all LLMs. If you switch providers (OpenAI → Anthropic), your corrections follow.
# Record a validated correction
curl -X POST https://www.adlibo.com/api/v1/hallucination/correct \
-H "Authorization: Bearer YOUR_API_KEY" \
-H "Content-Type: application/json" \
-d '{
"claimText": "durée de garantie produit X",
"incorrectValue": "3 ans",
"correctValue": "2 ans",
"sourceType": "document",
"sourceReference": "garanties.pdf page 12"
}'
# The correction is automatically applied to subsequent /api/v1/hallucination calls
# The response will include: "correctionsApplied": [...], "correctionPrompt": "## CORRECTIONS VALIDEES..."

Hallucination Guard adds to your existing Prompt Guard subscription.
| Plan | Prompt Guard Price | HG Option (+20%) | Total |
|---|---|---|---|
| Pro | CHF 99 | +CHF 19.80 | CHF 118.80 |
| Business | CHF 299 | +CHF 59.80 | CHF 358.80 |
| Enterprise | Custom | +20% | Custom |
Optional Multi-LLM: ADLIBO credits pack (from CHF 0.0005/req) or use your own API keys (unlimited).
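As a rough order of magnitude, assuming the entry rate applies per verified request (an assumption, not a quote): 30'000 requests/month × CHF 0.0005 ≈ CHF 15/month in ADLIBO credits for an SMB-sized workload.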
Everything you need to know about Hallucination Guard in production.
HG uses a 4-layer verification architecture:
| Mode | Latency | Accuracy |
|---|---|---|
| RAG only | +50-100ms | 85% |
| RAG + JSON-LD | +100-200ms | 92% |
| Multi-LLM (3 LLMs) | +300-500ms | 95% |
| Full (5 LLMs) | +500-800ms | 98% |
Yes. HG supports all LLMs through two methods: ADLIBO credit packs routed through OpenRouter, or your own API keys (OpenAI, Anthropic, Google, Mistral...).
Yes. The HG container runs on your infrastructure (on-premise, private cloud). Your data never passes through ADLIBO servers. Only anonymized metrics (response time, verification rate) are transmitted, and only if you allow it.
ROI depends on your use case. HG is clearly profitable if:
Recommended
Optional
HG Production Projection - Expected Results:
| KPI | Day 0 | Day 7 | Day 30 |
|---|---|---|---|
| Verification score | 0% | 65% | 95%+ |
| Hallucinations detected | - | ~15% | <2% |
| Active corrections | 0 | ~20 | 50-100 |
| Connected sources | 1 | 3-5 | 10+ |
* Based on average client deployments with recommended configuration.
Simplified formula:
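One plausible reading of the assumptions listed above (a sketch, not an official formula):

Annual savings ≈ requests/day × 30 × hallucination rate (2%) × detection rate (97%) × cost per error (CHF 100-200) × 12
ROI ≈ (annual savings − annual HG cost) ÷ annual HG cost × 100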
E-commerce example:
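An illustrative calculation with the SMB assumptions above (rounded figures, not a guarantee): 1'000 requests/day ≈ 30'000 requests/month; at a 2% hallucination rate, that is ~600 hallucinations/month; with 97% detection, roughly 580 are intercepted each month. At CHF 100-200 per avoided error, the avoided losses dwarf the CHF 19.80-59.80/month HG cost.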
Infrastructure
Integration
No. Hallucination Guard is an option (+20%) on your existing Prompt Guard subscription. This architecture ensures complete protection: Prompt Guard protects inputs (injections), HG verifies outputs (hallucinations).
Transparent pricing, no surprises. Start free and scale as you grow.
| Plan | Price | Requests/mo | Features | Action |
|---|---|---|---|---|
| HG Basic | +20% | | Single LLM validation (Gemini) | Add to Plan |
| HG Multi-LLM (Popular) | +40% | | 5 LLMs consensus validation via OpenRouter | Add to Plan |
| HG Enterprise | Custom quote | | Unlimited LLMs, custom integration | Contact Sales |
VAT not included | Billed in CHF
Need a custom plan? Contact us
Activate Hallucination Guard with your Prompt Guard subscription.
Hallucination Guard is built on industry standards to ensure interoperability, auditability and regulatory compliance.
Adhering to standards ensures interoperability with your existing tools and simplifies compliance audits.