>_ The Manifest

I asked our Zava Insurance HR Onboarding Buddy, “What’s the dress code?” and it confidently described a business-casual policy. The problem? Zava Insurance doesn’t have a dress code policy. Hallucination in enterprise agents isn’t random gibberish; it’s plausible-sounding nonsense, and behavior overrides are how you fix it. Continuing our Building Declarative Agents with M365 Agents Toolkit series, let’s add this production-readiness polish to our agent.

What Are Behavior Overrides?

Behavior overrides live in a top-level behavior_overrides object in your declarativeAgent.json manifest. They control how your agent thinks, not what it knows or what it can do.

Two settings:

  1. special_instructions.discourage_model_knowledge: Controls whether the agent sticks strictly to your grounded sources instead of leaning on its general training data.
  2. suggestions.disabled: Controls whether Copilot generates follow-up suggestions after each response.
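Both settings sit under the same behavior_overrides object. As a minimal sketch (other manifest properties omitted for brevity; field names follow the declarative agent schema shown later in this post):

```json
{
  "behavior_overrides": {
    "special_instructions": {
      "discourage_model_knowledge": true
    },
    "suggestions": {
      "disabled": false
    }
  }
}
```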

Prioritizing Knowledge Sources

When you set discourage_model_knowledge to true, you’re telling the agent: “Don’t use what you learned during training. Only use what I’ve given you: the embedded files, the SharePoint docs, the API plugins, the MCP servers.”

Without this flag, if a new hire asks “How many PTO days do I get?”, the model might blend your actual Employee Handbook with generic knowledge about average PTO policies. The response ends up mostly right but subtly wrong: correct day count from your document, but with rollover policies pulled from training data.

⚠️ Warning

Blended responses (where the model mixes grounded content with training data) are the hardest hallucinations to catch. They sound authoritative because they’re partially correct. Always test your agent with questions that are slightly outside your documents’ scope to see how it handles gaps.

With discourage_model_knowledge enabled, the same gap produces a better response:

“I couldn’t find a specific dress code policy in the Zava Insurance Employee Handbook. I’d recommend checking with your manager or HR Business Partner.”

The agent admits what it doesn’t know rather than filling gaps with generic knowledge.

💡 Tip

Think of discourage_model_knowledge as the difference between an employee who admits they don’t know the answer and one who confidently makes something up. In enterprise settings, the first one is always more trustworthy.

When to Enable It

Turn this on when:

  • Your agent handles domain-specific policies, procedures, or data (HR, legal, finance, compliance)
  • Accuracy matters more than breadth: users trust the agent’s answers to make decisions
  • Your knowledge sources are comprehensive enough to cover the agent’s scope
  • You’d rather the agent say “I don’t know” than guess

Leave it off when:

  • Your agent is a general-purpose assistant where broader knowledge is helpful
  • You want the model to supplement your documents with common-sense reasoning
  • Your knowledge sources are thin and you need the model to fill gaps (though you should probably add more sources instead)

Suggestions

The second behavior override controls suggestions: the contextual prompts Copilot generates after each response to help users continue the conversation.

By default, suggestions are enabled (disabled: false). After explaining the PTO policy, the agent might suggest “How do I request PTO?”, “What happens to unused PTO days?”, or “What’s the sick leave policy?” This solves the blank-page problem and guides users toward capabilities they didn’t know existed.

📝 Note

Follow-up suggestions are generated by Copilot based on conversation context, not defined by you in the manifest. They’re dynamic and contextual, which means they get better as the conversation progresses. Set suggestions.disabled to true only if you need to turn them off entirely.

You’d disable suggestions only in specific cases: maybe a tightly scoped workflow agent where you want users to follow a specific path, or an agent where the conversation should be strictly user-initiated. For our HR Onboarding Buddy, we’re keeping them on.
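For contrast, a tightly scoped workflow agent that should never offer follow-ups would flip the flag. A hypothetical fragment (the rest of the manifest is unchanged):

```json
{
  "behavior_overrides": {
    "suggestions": {
      "disabled": true
    }
  }
}
```

Because enabled is the default, omitting the suggestions object entirely has the same effect as setting disabled to false.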

The Configuration

The complete behavior_overrides block in your declarativeAgent.json:

{
  "$schema": "https://developer.microsoft.com/json-schemas/copilot/declarative-agent/v1.6/schema.json",
  "version": "v1.6",
  "name": "HR Onboarding Buddy",
  "description": "A friendly assistant to help new employees at Zava Insurance.",
  "instructions": "$[file('instructions.txt')]",
  "disclaimer": {
    "text": "I'm an AI assistant: always verify critical HR information with your HR Business Partner."
  },
  "behavior_overrides": { 
    "suggestions": { 
      "disabled": false
    }, 
    "special_instructions": { 
      "discourage_model_knowledge": true
    } 
  } 
}

Suggestions stay enabled for discoverability; model knowledge is discouraged for accuracy.

When to Use Each Setting

A quick decision framework:

| Setting | Set to true when | Set to false (or omit) when |
| --- | --- | --- |
| special_instructions.discourage_model_knowledge | Domain-specific policies, regulated content, accuracy-critical scenarios | General-purpose assistants, broad knowledge needed |
| suggestions.disabled | Strict workflow agents, user-initiated-only conversations | Almost everything; keep suggestions on for discoverability |

For most enterprise agents, the sweet spot is discourage_model_knowledge set to true and suggestions.disabled set to false (or omitted). You want accuracy from your grounded sources and discoverability from dynamic suggestions.

The Value: Accuracy and Trust

I’ve seen enterprise pilots fail not because the technology was bad, but because one confidently wrong answer destroyed trust with the entire user base. Behavior overrides give you control over that trust equation:

  • Reduced hallucination: The agent won’t fill knowledge gaps with plausible-sounding training data
  • Honest uncertainty: When the agent doesn’t know something, it says so
  • Guided discovery: Suggestions help users explore without getting lost
  • Production readiness: The polish that separates a demo from a deployed agent

Combined with the disclaimer we added in the previous post, behavior overrides complete the trust picture. Your users know it’s AI, and the AI stays in its lane.

Resources

Have questions or want to share what you're building? Connect with me on LinkedIn or check out more on The Manifest.