# What is LoreGuard
AI NPCs that stay grounded in your game's lore.
LoreGuard is an NPC dialogue system that generates responses using only information from your game bible. Every response is citation-verified against your source documents using natural language inference (NLI) plus LLM verification, significantly reducing hallucinations.
## How it works
You upload your lore documents — world history, character backstories, item databases, location descriptions. When a player talks to an NPC, LoreGuard retrieves relevant information and generates dialogue that's verified against your sources.
## Lore grounding
Unlike general-purpose AI chatbots, LoreGuard NPCs can only reference information you've provided. This means:
- Minimal hallucinations — every claim is verified against your source documents
- No contradictions — dialogue stays consistent with established canon
- Full traceability — Every claim links back to a source document
- Debug before shipping — See exactly what informed each response
If an NPC doesn't know something, they'll say so — rather than making something up.
# Pipeline Architecture
How LoreGuard reduces hallucinations through multi-pass verification.
The pipeline processes each player message through multiple verification stages. At each stage, the response is checked against your lore. If verification fails, it retries or strips the problematic content.
NPCs can only say things supported by evidence in your lore. Claims without citations are automatically rejected.
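The verify-and-retry control flow can be sketched in a few lines of Python. Here `generate` and `verify` are hypothetical stand-ins for the generation and verification stages; this is an illustration of the loop, not LoreGuard's actual implementation:

```python
# Sketch of the multi-pass verification loop. generate() and verify()
# are illustrative stand-ins for the real pipeline stages.

MAX_RETRIES = 3

def respond(message, sources, generate, verify):
    """Generate a reply, re-generating when verification fails.

    generate(message, sources, feedback) -> list of (claim, citation) pairs
    verify(claim, citation)              -> True if the source supports the claim
    """
    feedback = []
    for _ in range(MAX_RETRIES):
        claims = generate(message, sources, feedback)
        failed = [c for c, cite in claims if not verify(c, cite)]
        if not failed:
            return claims              # every claim verified: ship it
        feedback = failed              # tell the generator what to fix
    # Fail closed: keep only the claims that verified on the final pass
    return [(c, cite) for c, cite in claims if verify(c, cite)]
```

On each failed pass the generator receives the rejected claims as feedback; after the final pass only verified claims survive, which is the fail-closed behavior.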
## RAG retrieval
When a player sends a message, the pipeline retrieves relevant documents using semantic search.
- Embed the query — Convert player message to vector embedding
- Search the index — Find semantically similar lore chunks
- Rerank results — Cross-encoder reranks for relevance
- Return sources — Top-k chunks become "citable sources"
The NPC can only reference retrieved sources. If a fact isn't in context, the NPC won't know it.
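A toy sketch of those four steps, using bag-of-words vectors in place of a learned embedding model and plain cosine similarity in place of a cross-encoder reranker (all names here are illustrative):

```python
# Toy retrieval sketch: real deployments use a learned embedding model,
# a vector index, and a cross-encoder reranker.
from collections import Counter
from math import sqrt

def embed(text):
    """Bag-of-words stand-in for a vector embedding."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query, chunks, top_k=3):
    q = embed(query)                                        # 1. embed the query
    scored = [(cosine(q, embed(c)), c) for c in chunks]     # 2. search the index
    scored.sort(key=lambda s: s[0], reverse=True)           # 3. rerank (stand-in)
    return [c for score, c in scored[:top_k] if score > 0]  # 4. top-k citable sources
```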
## NLI verification
After generation, every citation is verified using Natural Language Inference. The system checks whether the cited source actually supports the claim.
| Result | Meaning | Action |
|---|---|---|
| Entailed | Source supports claim | Approved |
| Neutral | Source doesn't mention this | Retry or strip |
| Contradiction | Source contradicts claim | Reject and retry |
The pipeline retries up to 3 times. If issues remain, fail-closed behavior strips unverifiable claims rather than shipping hallucinations.
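The table maps directly onto a small dispatch, sketched below with standard NLI label names (`entailment`, `neutral`, `contradiction`); the `nli` callable stands in for the actual inference model:

```python
# Sketch mirroring the NLI result table; illustrative, not LoreGuard's code.

APPROVE, RETRY_OR_STRIP, REJECT_AND_RETRY = "approve", "retry_or_strip", "reject_and_retry"

def action_for(nli_label):
    return {
        "entailment":    APPROVE,           # source supports the claim
        "neutral":       RETRY_OR_STRIP,    # source doesn't mention it
        "contradiction": REJECT_AND_RETRY,  # source contradicts it
    }[nli_label]

def verify_citations(citations, nli):
    """nli(claim, source_text) -> 'entailment' | 'neutral' | 'contradiction'"""
    return [(claim, src, action_for(nli(claim, src))) for claim, src in citations]
```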
## Citations
Every response includes full provenance. Players see clean dialogue; you get traceability.
```json
{
  "speech": "The war ended 200 years ago when Queen Elara united the kingdoms.",
  "citations": [
    {"claim": "war ended 200 years ago", "source": "history.md:42"},
    {"claim": "Queen Elara united the kingdoms", "source": "characters/elara.md:18"}
  ]
}
```
Citations let you debug NPC responses before shipping. If something's wrong, you know exactly which document to fix.
## Fail-closed safety
When retries are exhausted and issues remain, the pipeline strips problematic claims rather than shipping hallucinations. The NPC might say less, but everything they say is grounded.
Fail-closed means the pipeline never delivers unverified claims. Silence is safer than hallucination.
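A minimal sketch of the stripping step, assuming sentence-level granularity (the real pipeline may strip at the claim level) and a hypothetical fallback line:

```python
# Fail-closed stripping sketch: drop unverified sentences, reassemble
# the rest. Granularity and fallback text are illustrative assumptions.

def strip_unverified(sentences, verified):
    """Keep only sentences whose claims verified; `verified` is a set of indices."""
    kept = [s for i, s in enumerate(sentences) if i in verified]
    return " ".join(kept) if kept else "I don't know anything about that."
```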
# Integrations
Platform support and deployment options.
LoreGuard integrates with your existing game infrastructure. Use our cloud API during development, then deploy offline for production.
## Steam authentication
For multiplayer games, authenticate players using Steam session tickets. This lets you track per-player conversations and apply rate limits without exposing API keys in your game client.
The flow:
- Game client obtains a Steam session ticket from the Steamworks SDK
- Client sends the ticket to LoreGuard to request a player JWT
- LoreGuard validates the ticket with Steam's Web API and issues the JWT
- Client uses the JWT for all subsequent NPC requests
Player JWTs are scoped to chat endpoints only and expire after 30 minutes. Your API keys never leave your server.
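Sketched client-side, the ticket exchange and expiry check might look like this. The endpoint path and field names are assumptions for illustration, not the documented API; only the 30-minute expiry comes from the text above:

```python
# Client-side sketch of the ticket-for-JWT exchange. The route and body
# fields are hypothetical; check the real API reference for exact shapes.
import time

EXCHANGE_ENDPOINT = "/v1/auth/steam"  # hypothetical path

def build_exchange_request(session_ticket):
    """Request the game client would POST to trade a Steam ticket for a JWT."""
    return {"url": EXCHANGE_ENDPOINT, "json": {"steam_session_ticket": session_ticket}}

def jwt_is_fresh(issued_at, now=None, ttl_seconds=30 * 60):
    """Player JWTs expire after 30 minutes; refresh before reuse."""
    now = time.time() if now is None else now
    return now - issued_at < ttl_seconds
```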
## Local Inference CLI
Run LoreGuard on your own hardware for development and testing. The CLI connects your local LLM to the LoreGuard pipeline.
```shell
# Install
pip install loreguard-cli

# Run interactive wizard
loreguard

# Or headless mode with your worker token
loreguard --token lg_worker_xxx --model /path/to/model.gguf

# Test NPC chat (uses cloud inference)
loreguard --chat --token lg_worker_xxx
```
The CLI guides you through model selection and connects to the LoreGuard backend for NPC pipeline processing.
## Game engines
LoreGuard works with any engine that can make HTTP requests. We provide example code for major platforms:
| Engine | Integration | Offline support |
|---|---|---|
| Unity | C# example + UnityWebRequest | Via loreguard-cli |
| Godot | GDScript example + HTTPRequest | Via loreguard-cli |
| Any Engine | REST API | Via loreguard-cli |
Cloud API works identically across all engines. For offline deployment, run loreguard-cli alongside your game.
Native engine plugins are on our roadmap. Currently, all engines integrate via REST API with example code provided in Python, C#, GDScript, and JavaScript.
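For any engine, the REST call reduces to one authenticated POST. The route and body schema below are illustrative placeholders, not the documented API:

```python
# Sketch of an NPC chat request from any HTTP-capable engine.
# URL, headers, and body fields are illustrative assumptions.
import json

def build_chat_request(base_url, player_jwt, npc_id, message):
    return {
        "method": "POST",
        "url": f"{base_url}/v1/npc/{npc_id}/chat",    # hypothetical route
        "headers": {
            "Authorization": f"Bearer {player_jwt}",  # player-scoped JWT, not an API key
            "Content-Type": "application/json",
        },
        "body": json.dumps({"message": message}),
    }
```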
# Capabilities
What LoreGuard NPCs can do.
## Offline deployment
Bundle LoreGuard with your game for offline play. No internet connection required, no per-player API costs, no network latency.
- One-time license — Pay once, ship to unlimited players
- Runs locally — Inference happens on player hardware
- Same quality — Identical pipeline to cloud version
- No data sent — Player conversations stay on their machine
The offline runtime is optimized for consumer hardware:
- RAM — 8GB minimum, 16GB recommended
- Storage — 6GB for model and runtime
- GPU — Optional but recommended for faster inference
Actual requirements depend on your lore size and NPC complexity.
## Game state awareness
NPCs can reference current game state in their responses. Pass context about inventory, quests, relationships, and world events.
```json
{
  "message": "What should I do next?",
  "game_state": {
    "player.inventory": ["rusty_sword", "health_potion"],
    "quest.dragon_slayer": "active",
    "npc.relationship": "friendly",
    "world.time": "night"
  }
}
```
The NPC considers both your static lore and dynamic game state when generating responses. A merchant knows you're carrying gold; a guard knows it's past curfew.
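Building that payload from live game objects is straightforward; the dotted key names mirror the example above, and the helper itself is illustrative:

```python
# Assemble the chat payload with dynamic game state.
# Helper and parameter names are illustrative.

def build_chat_payload(message, inventory, quests, relationship, world_time):
    state = {
        "player.inventory": list(inventory),
        "npc.relationship": relationship,
        "world.time": world_time,
    }
    for quest, status in quests.items():   # one dotted key per tracked quest
        state[f"quest.{quest}"] = status
    return {"message": message, "game_state": state}
```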
## Persistent memory
NPCs remember past conversations with each player. They recall what players told them, track relationship changes, and maintain continuity across sessions.
- Per-player history — Each player has their own conversation thread
- Configurable retention — Set how far back NPCs remember
- Relationship tracking — NPCs adjust tone based on history
- Cross-session — Memory persists between play sessions
Memory is still grounded in lore. An NPC might remember you asked about the war, but their answer is still citation-verified.