Mid-demo, an HR leader asked: “How does the employee know they’re talking to AI and not a real HR person?” When your agent is too good, people forget it’s AI, and that’s a liability. Let’s add disclaimers to our Zava Insurance HR Onboarding Buddy to fix that.
Why Disclaimers Matter More Than You Think
Disclaimers in declarative agents aren’t buried at the bottom of a terms-of-service page. They’re displayed prominently every time a user opens the agent: right there, before the first message is typed. That placement serves two critical purposes.
Trust Through Transparency
Users interact differently with AI when they know it’s AI. This isn’t speculation: it’s well-documented in UX research. When people understand they’re talking to an AI assistant, they:
- Ask better questions: They’re more specific because they know the agent needs clear prompts
- Verify important answers: They double-check critical information instead of blindly accepting it
- Set appropriate expectations: They don’t expect the agent to “remember” things from last week or have opinions on office politics
- Report issues more readily: They’re more likely to flag wrong answers when they know it’s a system, not a person
For our Zava Insurance HR Buddy, this matters enormously. A new hire asking about their PTO balance? An AI answer is perfectly reasonable. But a new hire asking about a sensitive accommodation request needs to know they should escalate to a real human. A clear disclaimer sets that context from the very first interaction.
Compliance Isn’t Optional
Many organizations, especially in regulated industries like insurance, healthcare, and finance, have explicit policies requiring AI disclosure. Some jurisdictions are writing it into law. The EU AI Act, for example, mandates that users must be informed when they’re interacting with an AI system.
Having the disclaimer built into the agent manifest means it’s always there. It’s not a UI element that a developer might forget to add. It’s not a banner that gets accidentally removed during a redesign. It’s part of the agent’s identity, shipped with every deployment, enforced by the platform.
Disclaimers in declarative agents are displayed by the Microsoft 365 Copilot host, not rendered by your code. This means the presentation is consistent across Teams, the web, and other Copilot surfaces. You provide the text, Microsoft handles the UX.
The JSON Configuration
The configuration is a single property in your manifest:
"disclaimer": {
"text": "I'm an AI assistant: always verify critical HR information with your HR Business Partner."
}
One property, one string. Add this to your declarativeAgent.json manifest, and every time a user opens the agent they'll see this disclaimer before the conversation begins.
In the full Zava Insurance HR Onboarding Buddy manifest:
```json
{
  "$schema": "https://developer.microsoft.com/json-schemas/copilot/declarative-agent/v1.6/schema.json",
  "version": "v1.6",
  "name": "HR Onboarding Buddy",
  "description": "Your friendly guide to everything Zava Insurance...",
  "instructions": "You are a helpful HR onboarding assistant for Zava Insurance...",
  "disclaimer": {
    "text": "I'm an AI assistant: always verify critical HR information with your HR Business Partner."
  },
  // Omitted for brevity...
}
```
That’s the entire configuration: no custom UI or rendering logic needed.
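Because the disclaimer lives in the manifest, it's easy to lint before deployment. Here's a minimal sketch of such a check in Python; `check_disclaimer` and the inline manifest are illustrative, not an official Microsoft tool, and the structure simply mirrors the snippet above.

```python
import json

# Hypothetical pre-deploy lint: confirm the manifest actually ships a
# non-empty disclaimer. Nothing here is part of the official tooling.
def check_disclaimer(manifest: dict) -> list[str]:
    """Return a list of problems with the manifest's disclaimer, if any."""
    problems = []
    disclaimer = manifest.get("disclaimer")
    if disclaimer is None:
        problems.append("manifest has no 'disclaimer' property")
    elif not isinstance(disclaimer, dict) or not disclaimer.get("text", "").strip():
        problems.append("'disclaimer.text' is missing or empty")
    return problems

# Parse a trimmed-down manifest like the one above.
manifest = json.loads("""
{
  "name": "HR Onboarding Buddy",
  "disclaimer": {
    "text": "I'm an AI assistant: always verify critical HR information with your HR Business Partner."
  }
}
""")
print(check_disclaimer(manifest))  # → []
```

Wire something like this into CI and a redesign can never silently drop the disclaimer, which is exactly the failure mode the manifest-based approach is meant to prevent.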
Writing Effective Disclaimer Text
The configuration is trivial, but the disclaimer content matters. A bad disclaimer is noise; a good one is a trust signal.
Be Specific, Not Generic
❌ Generic: “This is an AI-powered tool.”
✅ Specific: “I’m an AI assistant: always verify critical HR information with your HR Business Partner.”
The generic version tells users nothing they don’t already suspect. The specific version tells them what to verify and who to contact. That’s actionable.
Name the Escalation Path
Your disclaimer should answer the question: “If the AI gets it wrong, who do I talk to?” For our HR agent, that’s the HR Business Partner. For an IT helpdesk agent, it might be the service desk. For a legal agent, it might be outside counsel.
Don’t leave users stranded with “verify with appropriate personnel.” Tell them exactly where to go.
Keep It Short
Disclaimers are displayed in limited UI space. You’re not writing a legal brief: you’re writing a one-liner that sets expectations. Aim for one sentence, two at most. If you need a full AI usage policy, link to it from your agent’s instructions or embedded knowledge: don’t stuff it into the disclaimer.
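You can even enforce the "short" rule mechanically. The sketch below is an editorial house check, not a documented platform constraint: the 160-character budget and two-sentence cap are assumptions you'd tune to your own style guide.

```python
import re

# House-rule limits (our own editorial choices, not platform limits).
MAX_CHARS = 160
MAX_SENTENCES = 2

def disclaimer_warnings(text: str) -> list[str]:
    """Warn when a disclaimer exceeds our length or sentence budget."""
    warnings = []
    if len(text) > MAX_CHARS:
        warnings.append(f"{len(text)} chars exceeds budget of {MAX_CHARS}")
    # Rough sentence count: split on terminal punctuation.
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    if len(sentences) > MAX_SENTENCES:
        warnings.append(f"{len(sentences)} sentences; aim for {MAX_SENTENCES} at most")
    return warnings
```

Our Zava disclaimer passes both checks; a three-sentence AI usage policy pasted into the `text` field would not.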
Match Your Agent’s Tone
If your agent is formal and corporate, the disclaimer should be too. If your agent is friendly and approachable, like our HR Buddy, the disclaimer can be warmer. Consistency matters.
Test your disclaimer with actual users before shipping. What sounds clear to you as the developer might be confusing to someone in accounting who’s never thought about AI safety. A quick review from someone outside your team can save you from a disclaimer that misses the mark entirely.
The Zava Insurance Scenario
A new employee at Zava Insurance opens the HR Onboarding Buddy for the first time. Before they type a single word, they see:
“I’m an AI assistant: always verify critical HR information with your HR Business Partner.”
Now they ask: “How many PTO days do I get in my first year?”
The agent pulls from the SharePoint-grounded Employee Handbook and responds with a cited answer: 15 days for employees in their first year, prorated based on start date, with a link to the source document.
The user reads the answer, sees the citation, and remembers the disclaimer. They know this came from a document, they know it’s AI-generated, and they know they can verify with their HR Business Partner if it doesn’t feel right. Three layers of trust working together:
- The disclaimer: sets the expectation that this is AI
- The citation: shows where the information comes from
- The escalation path: tells them who to contact if they need a human
That’s responsible AI deployment.
A disclaimer does not replace proper grounding and instruction design. If your agent regularly hallucinates or gives incorrect answers, a disclaimer won’t save you: it will just become the thing users point to when they file complaints. Fix the grounding first, then add the disclaimer as a finishing touch.
The Value: Trust, Transparency, and a Better Experience
A disclaimer gives you:
- Trust: Users who know they’re talking to AI are more trusting, not less. Transparency breeds confidence.
- Transparency: Your organization can demonstrate to auditors, regulators, and leadership that AI interactions are clearly disclosed.
- Better conversations: When users understand the agent’s nature, they ask clearer questions, provide more context, and use the agent for what it’s good at.
- Reduced liability: If an agent gives incorrect information about a benefits calculation and the user makes a financial decision based on it, having a clear disclaimer that says “verify with HR” is materially different from having no disclaimer at all.
Resources
Have questions or want to share what you're building? Connect with me on LinkedIn or check out more on The Manifest.