NPC Conversations With Trust Gates

Ground an NPC's knowledge in what they actually know, make them refuse to hallucinate, and something interesting happens: you can gate secrets behind trust. That opens up gameplay mechanics that dialogue trees can't touch — interrogation puzzles, social engineering, trust-building over multiple sessions. We're starting with social engineering in Netshell, our game set inside a living internet.

Working a Contact

Netshell is a world where NPCs post on forums, argue on IRC, and send each other emails. When you DM one, you're not browsing a dialogue tree — you're working a contact. The mechanic is conversational: ask the right questions, build rapport, notice when someone's deflecting, and figure out how to get past it.

An NPC named Kira might know exactly who planted the backdoor in sector 7. The pipeline retrieved that knowledge when you asked, but her trust gate blocked it from surfacing, so instead she said something like "I work here. That's all you need to know." That's not a scripted response — the pipeline retrieved the secret, checked the gate, and the NPC deflected naturally, in character.

How Trust Gates Work

Every NPC has secrets organized by trust level. There are five tiers: public (trust 0+, always available), probation (60+), trusted (75+), inner_circle (90+), and never_shared (hard block — the NPC will never reveal it, no matter what). Each secret is a markdown file in the NPC's knowledge base with a trustLevel and an optional revealTrigger condition.
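A minimal sketch of that schema, assuming hypothetical names (`TRUST_TIERS`, `Secret`) rather than the engine's actual API. Each secret carries its tier and an optional reveal trigger; `never_shared` has no threshold because it is a hard block:

```python
from dataclasses import dataclass
from typing import Optional

# Tier thresholds from the post; None marks the hard-blocked tier.
TRUST_TIERS = {
    "public": 0,
    "probation": 60,
    "trusted": 75,
    "inner_circle": 90,
    "never_shared": None,
}

@dataclass
class Secret:
    path: str                             # markdown file in the NPC's knowledge base
    tier: str                             # key into TRUST_TIERS
    reveal_trigger: Optional[str] = None  # optional extra condition

    def required_trust(self) -> Optional[int]:
        """Trust score needed to unlock this secret, or None if never shared."""
        return TRUST_TIERS[self.tier]
```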

When a player asks about gated information, the engine checks the player's current trust score against the required level. If it's high enough, the secret enters the NPC's context and they can reference it naturally. If not, the NPC deflects — but in character, never as "access denied." Kira doesn't say "insufficient trust level." She changes the subject, gets evasive, or tells you to mind your own business.

         Trust 0        Trust 60        Trust 75        Trust 90
            |               |               |               |
        [public]       [probation]       [trusted]     [inner_circle]
      "Yeah, I work    "The company     "I wrote the    "There's a
       here. Why?"     isn't what       firewall here.  backdoor in
                       it seems..."     All of it."     sector 7."
            |               |               |               |
            +---------------+---------------+---------------+
                           NPC: kira
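The gate check itself reduces to a comparison, sketched here with hypothetical names: the engine compares the player's trust score to the tier's threshold and either surfaces the secret into the NPC's context or signals an in-character deflection.

```python
TIER_THRESHOLD = {"public": 0, "probation": 60, "trusted": 75, "inner_circle": 90}

def gate_check(player_trust: int, tier: str) -> str:
    """Return 'reveal' if the secret may enter the NPC's context, else 'deflect'."""
    if tier == "never_shared":
        return "deflect"  # hard block: never revealed at any trust level
    return "reveal" if player_trust >= TIER_THRESHOLD[tier] else "deflect"
```

With a trust score of 62, `gate_check(62, "probation")` yields `"reveal"` while `gate_check(62, "inner_circle")` yields `"deflect"`: Kira can hint that the company isn't what it seems, but keeps the backdoor to herself.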

On disk, this maps to a folder structure in the NPC's knowledge base. Each trust tier is a directory, and secrets are markdown files inside it:

kira/
├── backstory.md              # public — always accessible
├── technical_expertise.md    # public
└── secrets/
    ├── public/
    │   └── cover_story.md    # trust 0+
    ├── probation/
    │   └── company_truth.md  # trust 60+
    ├── trusted/
    │   └── wrote_firewall.md # trust 75+
    └── inner_circle/
        └── sector7_backdoor.md  # trust 90+
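One way to load that layout, sketched under the assumption that the directory names double as tier names (the loader function here is illustrative, not the engine's):

```python
from pathlib import Path

def load_secrets(npc_dir: str) -> dict[str, list[str]]:
    """Map each tier directory under <npc_dir>/secrets to its markdown contents."""
    secrets: dict[str, list[str]] = {}
    for tier_dir in Path(npc_dir, "secrets").iterdir():
        if tier_dir.is_dir():
            secrets[tier_dir.name] = sorted(
                f.read_text() for f in tier_dir.glob("*.md")
            )
    return secrets
```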

The engine also returns engagement states — engaged, deflecting, exhausted, uncomfortable — so the game layer can manage conversation flow. If Kira is deflecting, that's a signal to the player: you need a different angle, more trust, or to come back after something changes in the world.

Player asks -> [Gate Check] -> trust >= required?
                                  | yes: reveal secret
                                  | no:  NPC deflects in character
                                         engagement state: deflecting
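The four engagement states could be modeled as an enum the game layer switches on. This is a sketch with hypothetical names; the hints are examples of how a game might nudge the player, not engine output:

```python
from enum import Enum

class Engagement(Enum):
    ENGAGED = "engaged"
    DEFLECTING = "deflecting"
    EXHAUSTED = "exhausted"
    UNCOMFORTABLE = "uncomfortable"

def hint_for(state: Engagement) -> str:
    """Map each engagement state to a player-facing nudge."""
    return {
        Engagement.ENGAGED: "keep going",
        Engagement.DEFLECTING: "try a different angle or build more trust",
        Engagement.EXHAUSTED: "come back later",
        Engagement.UNCOMFORTABLE: "back off this topic",
    }[state]
```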

Trust isn't binary, either. A player might have enough trust to learn that "the company isn't what it seems" (probation tier) but not enough to hear about the backdoor (inner_circle). The NPC navigates this naturally — they share what they're comfortable sharing and deflect on the rest, just like a real person would.
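That graded behavior falls out of filtering tiers by the player's score, sketched here with illustrative names:

```python
# Tiers in ascending order of required trust (thresholds from the post).
THRESHOLDS = [("public", 0), ("probation", 60), ("trusted", 75), ("inner_circle", 90)]

def accessible_tiers(player_trust: int) -> list[str]:
    """All tiers whose secrets may enter the NPC's context at this trust score."""
    return [tier for tier, need in THRESHOLDS if player_trust >= need]
```

At trust 62, `accessible_tiers(62)` returns `["public", "probation"]`: the company's secret is on the table, the backdoor is not.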

Ambient Life Feeds Conversation

The conversation pipeline doesn't exist in isolation. NPCs live ambient lives — Echo posts conspiracy rants on the forum, Maxwell sends dry technical emails, Cipher stays quiet because that's what Cipher does. When you DM Echo later, the pipeline retrieves that forum post from thirty minutes ago as citable knowledge in your conversation. The world isn't a static lore dump. It's a living record that NPCs draw from when they talk to you.
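One way to picture the ambient record is as timestamped events the pipeline can later retrieve. The schema below is a loose sketch, not the engine's data model:

```python
from dataclasses import dataclass

@dataclass
class WorldEvent:
    npc: str          # who acted
    channel: str      # "forum", "irc", "email"
    text: str         # what they said
    timestamp: float  # when it happened

def citable_for(npc: str, events: list[WorldEvent]) -> list[WorldEvent]:
    """Everything an NPC publicly did is fair game in a later DM with them."""
    return [e for e in events if e.npc == npc]
```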

Where This Is Going

The engine is in beta and we're integrating it into Netshell now, where social engineering is the first real test of trust-gated conversation. But the architecture isn't specific to hacking games — the same trust gates and grounded retrieval could drive an interrogation trainer, a medical simulation, or interactive fiction where NPCs remember what you said three sessions ago and decide whether they trust you enough to say what they know.

Loreguard — character management, hosted pipeline, game integrations — is at loreguard.com.