Hallucination Guard Documentation
An add-on to Prompt Guard (+20% on your subscription).
Verify your AI responses against your own data. Runs as an on-premise container; your data never reaches ADLIBO.
Quick Start
1. Enable Hallucination Guard
In your ADLIBO dashboard, go to Settings → Hallucination Guard → Enable (+20% on your Prompt Guard subscription).
2. Deploy the Container
```bash
# Pull the container
docker pull registry.adlibo.com/hallucination-guard:latest

# Run with your license key
docker run -d \
  -e ADLIBO_LICENSE_KEY=your_license_key \
  -e HG_ADLIBO_ENDPOINT=https://www.adlibo.com/api \
  -p 8080:8080 \
  registry.adlibo.com/hallucination-guard:latest
```
3. Connect Your Sources
Connect your internal data sources to build your validated knowledge base.
```bash
# Add a source via the API
curl -X POST http://localhost:8080/api/sources \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "type": "website",
    "url": "https://docs.yourcompany.com",
    "crawlDepth": 3
  }'
```
4. Verify a Response
```bash
curl -X POST http://localhost:8080/api/verify \
  -H "Content-Type: application/json" \
  -d '{
    "response": "Your LLM response to verify",
    "context": "Optional: original user question"
  }'
```
Response:
```json
{
  "verified": true,
  "score": 85,
  "sources": [
    {
      "type": "internal",
      "url": "https://docs.yourcompany.com/warranty",
      "confidence": 100
    }
  ],
  "claims": [
    {
      "text": "warranty is 3 years",
      "verified": true,
      "source": "internal"
    }
  ]
}
```
Architecture
```
┌─────────────────────────────────────────────────────────────┐
│                     YOUR INFRASTRUCTURE                     │
├─────────────────────────────────────────────────────────────┤
│                                                             │
│  ┌──────────────┐      ┌─────────────────────────────────┐  │
│  │   Your App   │─────▶│  ADLIBO HG Container            │  │
│  │    (LLM)     │◀─────│  - Response verification        │  │
│  └──────────────┘      │  - Knowledge base               │  │
│                        │  - Confidence score             │  │
│  ┌──────────────┐      └────────────────┬────────────────┘  │
│  │ Your Sources │                       │                   │
│  │  CRM, ERP,   │───────────────────────┤                   │
│  │  Docs, Site  │  SSH tunnel           │                   │
│  └──────────────┘  (22/443/2222/8022)   │                   │
│                                         ▼                   │
└─────────────────────────────────────────│───────────────────┘
                                          │
                               ┌──────────▼──────────┐
                               │    ADLIBO Cloud     │
                               │  - License check    │
                               │  - Pattern updates  │
                               │  - Multi-LLM        │
                               │  (no customer data) │
                               └─────────────────────┘
```
Available SSH ports: 22 (standard), 443 (HTTPS), 2222, 8022. Choose the port when registering your key.
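Before registering your key, it can help to check which of the listed ports is actually reachable from your network. A minimal sketch (the hostname below is a placeholder for the tunnel host shown in your ADLIBO dashboard; the `probe` parameter is injectable so the helper can be exercised without network access):

```python
import socket

# Ports documented for the ADLIBO SSH tunnel.
TUNNEL_PORTS = [22, 443, 2222, 8022]

def tcp_reachable(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def first_open_port(host: str, ports=TUNNEL_PORTS, probe=tcp_reachable):
    """Return the first reachable port from `ports`, or None if all are blocked."""
    for port in ports:
        if probe(host, port):
            return port
    return None

# Example (replace with the tunnel host from your dashboard):
# first_open_port("tunnel.example.com")
```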
Trust Hierarchy
Validated internal source: 100%
ALWAYS takes priority. Overrides the LLMs even if they all say the opposite.
LLM Consensus 4/4: 80%
LLM Consensus 3/4: 60%
LLM Divergence: 40%
Important: 5+ LLMs are recommended for reliable consensus; 2-3 LLMs provide only partial protection.
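The tiers above can be sketched as a scoring function. This is only an illustration of the documented hierarchy, not the container's actual implementation; in particular, the 3/4 tier is generalized here to any majority of at least 75%:

```python
def trust_score(internal_match: bool, agreeing: int, total: int) -> int:
    """Map the trust hierarchy to a confidence score (illustrative)."""
    if internal_match:
        return 100  # validated internal source always wins
    if total and agreeing == total:
        return 80   # full consensus, e.g. 4/4
    if total and agreeing / total >= 0.75:
        return 60   # strong majority, e.g. 3/4
    return 40       # divergence

# A validated internal source overrides even unanimous LLM disagreement:
# trust_score(True, 0, 4) == 100
```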
API Reference
POST /api/verify
Verify an LLM response against your sources.
```json
{
  "response": "string (required) - The LLM response to verify",
  "context": "string (optional) - Original user question",
  "options": {
    "multiLlm": true,          // Enable multi-LLM validation
    "llmCount": 5,             // Number of LLMs (2-30)
    "includeSourceUrls": true  // Include source URLs in the response
  }
}
```
POST /api/sources
Add a data source.
```json
{
  "type": "website | api | file | database",
  "url": "string - URL or connection string",
  "crawlDepth": 3,                              // For websites
  "refreshInterval": "daily | weekly | manual"
}
```
GET /api/score
Get your current protection score.
```json
{
  "score": 75,
  "sources": {
    "connected": 12,
    "validated": 8
  },
  "lastUpdate": "2026-01-12T10:00:00Z"
}
```
Ready to protect your AI responses?
Enable Hallucination Guard in your dashboard.
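Putting it together: a typical integration calls POST /api/verify and gates the answer before showing it to the user. A minimal client-side sketch, using the sample response shown in step 4 (the 70-point threshold is an assumption for illustration, not an ADLIBO default):

```python
import json

# Sample /api/verify response, same shape as documented above.
SAMPLE = json.loads("""
{
  "verified": true,
  "score": 85,
  "sources": [
    {"type": "internal", "url": "https://docs.yourcompany.com/warranty", "confidence": 100}
  ],
  "claims": [
    {"text": "warranty is 3 years", "verified": true, "source": "internal"}
  ]
}
""")

MIN_SCORE = 70  # assumed threshold; tune to your risk tolerance

def gate(result: dict):
    """Return (ok, unverified_claims): ok is True when the answer is safe to show."""
    unverified = [c["text"] for c in result.get("claims", []) if not c.get("verified")]
    ok = bool(result.get("verified")) and result.get("score", 0) >= MIN_SCORE and not unverified
    return ok, unverified

ok, flagged = gate(SAMPLE)
```

In an application, `result` would come from the HTTP call in step 4; when `ok` is False you might fall back to a safe answer or surface the flagged claims for review.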