Senseway - Intelligent LLM Gateway
Route your prompts to the best LLM. Sensitive data is automatically tokenized via DataShield before sending, and tokens are rehydrated in responses.
Overview
Smart routing
Automatic content analysis selects the best model based on domain, complexity, and budget.
DataShield protection
Automatic PII tokenization (IBAN, email, phone, etc.) before sending to the LLM. Transparent rehydration in the response.
Sovereign & International
Models available from international and sovereign providers (Swiss + EU); the exact count is updated dynamically. Sovereign-only mode available.
Cloud & On-Premise
Cloud SaaS or On-Premise deployment (Podman/container) in your infrastructure for full control.
Architecture
Processing flow
User
Prompt
Prompt Guard
Injection analysis
DataShield
PII tokenization
Smart Router
Model selection
LLM
Tokenized data
Rehydration
Full response
Prompt Guard
Hybrid analysis (regex + semantic TF-IDF). Detects prompt injection, jailbreaks, and NSFW content. Scores 0-100; auto-blocks above 85.
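The regex half of this hybrid check can be sketched as a weighted pattern match capped at 100. The patterns and weights below are illustrative assumptions, not Senseway's actual rule set; only the 0-100 scale and the block-above-85 threshold come from the description above.

```typescript
// Illustrative sketch of the regex half of a hybrid prompt guard.
// Patterns and weights are hypothetical; threshold 85 is from the docs.
const INJECTION_PATTERNS: Array<{ pattern: RegExp; weight: number }> = [
  { pattern: /ignore (all )?previous instructions/i, weight: 60 },
  { pattern: /you are now (DAN|an? unrestricted)/i, weight: 50 },
  { pattern: /reveal your system prompt/i, weight: 45 },
];

const BLOCK_THRESHOLD = 85; // auto-block above this score

function guardScore(prompt: string): { score: number; blocked: boolean } {
  // Sum the weights of matched patterns, capped at 100 (0-100 scale).
  const raw = INJECTION_PATTERNS
    .filter(({ pattern }) => pattern.test(prompt))
    .reduce((sum, { weight }) => sum + weight, 0);
  const score = Math.min(raw, 100);
  return { score, blocked: score > BLOCK_THRESHOLD };
}
```

In the real gateway this score would be combined with the semantic TF-IDF signal before the block decision is made.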
DataShield
Automatic personal data tokenization (3900+ DLP patterns for 17+ jurisdictions). Session vault for token consistency.
Smart Router
Automatic selection of the best model based on domain, complexity, budget, and historical performance.
Smart Routing
4 scoring axes
Complexity
40% — Short, simple messages route to economy models; code, legal analysis, and medical content route to premium models.
Domain
20% — Auto-detection: CODE, LEGAL, HEALTHCARE, FINANCE, etc. Each domain has optimal models.
Cost
20% — 4 tiers: economy (< CHF 0.001/1K tokens), standard (< CHF 0.01), premium (> CHF 0.01). Enforces organization budget compliance.
Satisfaction
20% — User feedback (thumbs up/down). The matrix improves with usage (min. 50 requests per combination).
Auto (default)
Senseway picks the best model based on content, domain, and budget.
Sovereign only
Restricts routing to Swiss-hosted (Infomaniak) and EU-hosted (Mistral) models.
Manual
User or API explicitly chooses the model to use.
Session Stickiness
Once a model is selected for a conversation, it is kept for the entire session to maintain context coherence. Switching models manually remains possible.
Predictive routing
Scoring matrix (model × category × language), refreshed every 15 minutes from usage data. Weighted composite score: success rate (40%), latency (20%), cost (20%), satisfaction (20%). Activates after 50+ requests per combination.
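The composite score above can be sketched as a simple weighted sum. This assumes each axis is already normalized to [0, 1] with higher meaning better (i.e., latency and cost have been inverted upstream); the interface names are hypothetical.

```typescript
// Sketch of the weighted composite routing score (40/20/20/20).
// Assumes all metrics are pre-normalized to [0, 1], higher = better.
interface RoutingMetrics {
  successRate: number;  // 0..1
  latencyScore: number; // 0..1, higher = faster
  costScore: number;    // 0..1, higher = cheaper
  satisfaction: number; // 0..1, from thumbs up/down feedback
  requestCount: number; // predictive routing activates at 50+ requests
}

const MIN_REQUESTS = 50;

function compositeScore(m: RoutingMetrics): number | null {
  // Below the activation threshold there is not enough data to score.
  if (m.requestCount < MIN_REQUESTS) return null;
  return (
    0.4 * m.successRate +
    0.2 * m.latencyScore +
    0.2 * m.costScore +
    0.2 * m.satisfaction
  );
}
```

A `null` result would signal the router to fall back to its static heuristics for that model/category/language cell.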
Supported LLM Models
| Provider | Region | Models | Context | Sovereign |
|---|---|---|---|---|
| Infomaniak | Switzerland | Mixtral, Mistral 3, Llama 3, Qwen 3, Granite, Gemma 3n | 128K | Yes |
| Mistral AI | France / EU | Mistral Large, Medium, Small, Nemo, Codestral | 128K-256K | Yes |
| OpenAI | International | GPT-5, GPT-4.1, GPT-4o, o3, o4-mini | 128K-200K | - |
| Anthropic | International | Claude Opus 4.6, Sonnet 4.6, Haiku 4.5 | 200K | - |
| Google | International | Gemini 3 Pro/Flash, Gemini 2.5 Pro/Flash, Gemma 3 | 1M-2M | - |
| DeepSeek | International | DeepSeek V3.2, R1 | 128K | - |
| xAI | International | Grok 4, Grok 3 | 128K-1M | - |
| Meta / OpenRouter | International | Llama 4 Maverick/Scout, Llama 3.3 70B | 128K-1M | - |
| Cohere | International | Command A, Command R | 128K | - |
| Perplexity | International | Sonar Pro, Sonar Reasoning Pro | 128K | - |
| Groq | International | Llama 3.3 70B, Gemma2 9B (ultra-fast) | 128K | - |
Model count updated dynamically from the database (refreshed every 15 min).
Data Protection
Protection pipeline
# Example: what the LLM receives
# Original prompt:
"Analyze the file of Jean Dupont, IBAN CH93 0076 2011 6238 5295 7"
# After DataShield (sent to LLM):
"Analyze the file of SWY_NAME_a7f3, IBAN SWY_IBAN_b2e1"
# LLM response (tokenized):
"The file of SWY_NAME_a7f3 with IBAN SWY_IBAN_b2e1 shows..."
# After rehydration (returned to user):
"The file of Jean Dupont with IBAN CH93 0076 2011 6238 5295 7 shows..."Automatic tokenization
3900+ DLP patterns detect PII across 17+ jurisdictions: IBAN, email, phone, AVS/AHV, SSN, etc.
Prompt Guard
Detection of prompt injection, jailbreaks, and NSFW content. Score 0-100, auto-block if >85.
Session vault
Tokens stay consistent within a conversation: "Jean Dupont" = always SWY_NAME_a7f3 in the same session.
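A minimal sketch of such a session vault: the same PII value always maps to the same token within a session, and known tokens can be swapped back on the way out. The token format follows the `SWY_<TYPE>_<id>` shape from the example above; the id generation is illustrative.

```typescript
// Minimal session-vault sketch: consistent tokens within one session.
// Token format follows the SWY_<TYPE>_<id> shape shown in the docs;
// the id scheme here is illustrative, not Senseway's actual one.
class SessionVault {
  private tokens = new Map<string, string>(); // PII value -> token
  private values = new Map<string, string>(); // token -> PII value
  private counter = 0;

  tokenize(value: string, type: string): string {
    const existing = this.tokens.get(value);
    if (existing) return existing; // same value, same token, all session long
    const token = `SWY_${type}_${(this.counter++).toString(16).padStart(4, '0')}`;
    this.tokens.set(value, token);
    this.values.set(token, value);
    return token;
  }

  rehydrate(text: string): string {
    // Replace every known token in the LLM response with its original value.
    let out = text;
    this.values.forEach((value, token) => {
      out = out.split(token).join(value);
    });
    return out;
  }
}
```

Keeping both maps in memory for the session is what makes rehydration possible without the LLM ever seeing the real values.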
Encryption at rest
Conversations encrypted AES-256-GCM. Ephemeral session keys. Zero server-side knowledge.
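In Node.js terms, AES-256-GCM with an ephemeral key looks roughly like the sketch below. Key handling is simplified for illustration; the real key lifecycle (derivation, rotation, zero-knowledge storage) is not shown here.

```typescript
import { randomBytes, createCipheriv, createDecipheriv } from 'node:crypto';

// Illustrative AES-256-GCM roundtrip with an ephemeral session key.
// Real key management (rotation, zero server-side knowledge) not shown.
function encrypt(plaintext: string, key: Buffer) {
  const iv = randomBytes(12); // 96-bit nonce, the recommended size for GCM
  const cipher = createCipheriv('aes-256-gcm', key, iv);
  const ciphertext = Buffer.concat([cipher.update(plaintext, 'utf8'), cipher.final()]);
  return { iv, ciphertext, tag: cipher.getAuthTag() };
}

function decrypt(box: { iv: Buffer; ciphertext: Buffer; tag: Buffer }, key: Buffer): string {
  const decipher = createDecipheriv('aes-256-gcm', key, box.iv);
  decipher.setAuthTag(box.tag); // GCM authenticates ciphertext integrity
  return Buffer.concat([decipher.update(box.ciphertext), decipher.final()]).toString('utf8');
}

const sessionKey = randomBytes(32); // ephemeral 256-bit key
```

The GCM auth tag is what makes tampering with stored ciphertext detectable at decryption time.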
Business Domains & Settings
9 business domains
Each domain provides boosted contextual agents (score +30), 3 system prompt presets (FR/EN/DE), and the ability to restrict LLM responses to the selected domain.
Interactive tokenization
Per-element PII toggle (enable/disable protection per item). Click-to-tokenize: select text, click "Protect", assign a custom token. The system remembers user preferences.
Budget & Cost Control
Monthly CHF budget, hard limit (block), configurable alert (10-100%). Catalog access policy: catalog_only, open, block_all. Model whitelist/blacklist.
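The budget policy above can be sketched as a three-way decision per request. The field names and decision shape are assumptions for illustration; only the hard-limit block, the configurable 10-100% alert threshold, and the CHF monthly budget come from the description.

```typescript
// Sketch of per-request budget enforcement. Field names are assumed.
interface BudgetPolicy {
  monthlyBudgetChf: number;
  hardLimit: boolean;        // block requests once the budget is exhausted
  alertThresholdPct: number; // configurable, 10-100
}

type BudgetDecision = 'allow' | 'alert' | 'block';

function checkBudget(
  spentChf: number,
  estimatedCostChf: number,
  p: BudgetPolicy,
): BudgetDecision {
  const projected = spentChf + estimatedCostChf;
  if (p.hardLimit && projected > p.monthlyBudgetChf) return 'block';
  if (projected >= p.monthlyBudgetChf * (p.alertThresholdPct / 100)) return 'alert';
  return 'allow';
}
```

An 'alert' decision would let the request through while notifying the organization admin; 'block' rejects it outright.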
Context change detection
When domain shifts mid-conversation (e.g., HR to Finance), a modal offers: "Keep" documents or "Remove all".
Smart suggestions
Contextual input chips based on domain, uploaded documents, and conversation context.
Agent System
24 available agents: 17 PERSONAL + 7 PRO. Scoring: baseScore + domain bonus + file type bonus + content signal bonus. Threshold: relevanceScore >= 20 to appear.
+30 points for domain match
Expand: label + description + score + cost
Confirmation: Execute / Cancel
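The scoring formula above can be sketched as follows. The +30 domain bonus and the >=20 visibility threshold come from the description; the file-type and per-signal bonus values are illustrative assumptions.

```typescript
// Sketch of agent relevance scoring. Documented: +30 domain bonus,
// visibility threshold >= 20. Other bonus values are assumed.
interface AgentContext {
  domainMatch: boolean;
  fileTypeMatch: boolean;
  contentSignals: number; // number of matched content signals
}

const DOMAIN_BONUS = 30; // documented
const THRESHOLD = 20;    // documented

function relevanceScore(baseScore: number, ctx: AgentContext): number {
  let score = baseScore;
  if (ctx.domainMatch) score += DOMAIN_BONUS;
  if (ctx.fileTypeMatch) score += 10;  // assumed bonus
  score += ctx.contentSignals * 5;     // assumed per-signal bonus
  return score;
}

function isVisible(baseScore: number, ctx: AgentContext): boolean {
  return relevanceScore(baseScore, ctx) >= THRESHOLD;
}
```

With these numbers, a domain match alone is enough to surface an agent, which matches the intent of the +30 boost.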
Integrations
Web Chat (B2B Dashboard)
Chat interface built into ADLIBO dashboard. Workspace with 15 contextual agents and dynamic scoring.
REST API
/api/v1/senseway/chat endpoint with SSE streaming, RBAC, and auto-routing. OpenAI-compatible format.
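Since the endpoint streams via SSE, a client has to parse `data:` lines off the wire. The sketch below assumes the OpenAI-style convention (`data: <json>` lines, `[DONE]` sentinel) and a hypothetical `delta` field for incremental text; the actual payload shape may differ.

```typescript
// Sketch of parsing one SSE chunk from the chat endpoint.
// Assumes OpenAI-style `data:` lines with a `[DONE]` sentinel;
// the `delta` field name is an assumption, not the documented schema.
function parseSseChunk(chunk: string): string[] {
  const deltas: string[] = [];
  for (const line of chunk.split('\n')) {
    if (!line.startsWith('data: ')) continue;
    const payload = line.slice('data: '.length).trim();
    if (payload === '[DONE]') break; // end-of-stream sentinel
    const event = JSON.parse(payload);
    if (typeof event.delta === 'string') deltas.push(event.delta);
  }
  return deltas;
}
```

In practice you would feed `ReadableStream` chunks from `fetch` through this parser and append the deltas to the displayed response.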
SDK (@adlibo/senseway-sdk)
TypeScript/JavaScript SDK published on npm.adlibo.com. ESM + CJS + types. 9 endpoints.
Endpoint Shield
Chrome/Firefox/Edge browser extension. Intercepts and protects prompts on 72+ LLM sites.
Transparent Cloud Proxy
Intercepts all outbound LLM traffic without code changes. PAC, ENV, DNS, or code.
On-Premise
Deployment in your infrastructure via Podman/container. Full data control.
Quick example (SDK)
import { Senseway } from '@adlibo/senseway-sdk';

const senseway = new Senseway({
  apiKey: 'sw_live_xxxxx',
});

const response = await senseway.chat({
  message: 'Analyze the client file with IBAN CH93 0076 2011...',
  autoSelect: true,   // Smart routing
  userRole: 'banker', // RBAC
});

console.log(response.text);           // LLM response (rehydrated)
console.log(response.model);          // Model used
console.log(response.tokensDetected); // PII tokenized

Full API reference
Detailed endpoint documentation, parameters, Python/TypeScript/cURL examples.
View Senseway API documentation

Senseway Force
Senseway Force blocks all direct LLM access and forces traffic through the protected gateway. All LLM traffic from your organization is automatically tokenized. Available on Enterprise plan.
Full Senseway Force documentation

Cost Tiers
| Tier | Cost/1K tokens | Example models | Typical use |
|---|---|---|---|
| Economy | < CHF 0.001 | GPT-4o mini, Gemini Flash, DeepSeek V3, Llama 4, Mistral Small | Simple questions, lookups, quick chat |
| Standard | CHF 0.001 - 0.01 | GPT-5, Claude Sonnet 4.6, Gemini 2.5 Pro, Mistral Large | Analysis, writing, code, specialized domains |
| Premium | > CHF 0.01 | Claude Opus 4.6, GPT-5 Pro, o3-pro, o1 | Advanced reasoning, critical tasks |
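The tier boundaries in the table map directly to a small classifier, sketched below (the function name is illustrative):

```typescript
// Maps a per-1K-token cost (CHF) to the tiers defined in the table above.
type Tier = 'economy' | 'standard' | 'premium';

function classifyTier(costPer1kChf: number): Tier {
  if (costPer1kChf < 0.001) return 'economy';  // < CHF 0.001
  if (costPer1kChf <= 0.01) return 'standard'; // CHF 0.001 - 0.01
  return 'premium';                            // > CHF 0.01
}
```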
Need help?
Our team can help you set up Senseway for your organization.