I asked an AI to build my AI agent.
Read that sentence again. Let it sit for a moment. Because that’s exactly what happened last Tuesday afternoon when I opened a terminal, fired up GitHub Copilot CLI, and said: “Create a declarative agent that helps our team manage support escalations.”
Twenty minutes later, I had a fully scaffolded project, a polished declarativeAgent.json, thoughtful instructions, an API plugin wired up, and a deployed agent running in Copilot. I didn’t author a single file manually. The coding agent did it all.
This is the meta workflow. And once you experience it, there’s no going back.
This workflow only became this smooth because of a secret ingredient: Agents Toolkit Skills. Before the full walkthrough, it’s worth understanding what they are and why they matter.
What Are Agents Toolkit Skills?
Skills are curated knowledge modules that plug into your coding agent. Think of them as specialized personas: when you install the Microsoft 365 Agents Toolkit plugin from the microsoft/work-iq marketplace, your coding agent gains deep expertise in the M365 Copilot extensibility surface. Schemas, best practices, CLI commands, deployment workflows, and more.
The Microsoft 365 Agents Toolkit plugin and its skills are currently in public preview. Expect rapid iteration, and open an issue if you hit a rough edge.
The plugin ships with three skills today:
| Skill | What It Does |
|---|---|
| declarative-agent-developer | End-to-end agent lifecycle: scaffolding, manifest authoring, capability configuration, API and MCP plugin integration, localization, and deployment |
| install-atk | Install or update the Agents Toolkit CLI and VS Code extension |
| ui-widget-developer | Build MCP servers with rich interactive widget rendering for Copilot Chat |
Each skill carries its own decision trees, validation rules, and safety guardrails so the agent does the right thing, and refuses to do the wrong thing.
Without skills, a coding agent is guessing at what a declarative agent manifest should look like based on training data. With skills, it knows. It knows the schema version, the folder conventions, the CLI flags, the deployment flow. That’s the difference between a plausible output and a correct one.
The skills enforce strict safety guardrails. They won’t silently create files in non-agent projects, won’t deploy broken manifests, and always follow a Detect, Inform, Ask protocol when they encounter issues. You stay in control.
The Coding Agent + Agents Toolkit Skills Workflow
What’s actually happening here sounds like magic, but it’s really just a well-structured loop.
A coding agent (GitHub Copilot in VS Code, GitHub Copilot CLI, or Claude Code) is a tool that reads your prompts, understands your project context, and generates code, configuration, and files directly in your workspace. When you combine that with the Microsoft 365 Agents Toolkit skills, the coding agent doesn’t just guess at what a declarative agent looks like. It knows the schema, the conventions, the file structure, and the best practices.
The workflow is a tight loop:
- Prompt - You describe what you want in natural language
- Scaffold - The coding agent generates the project structure using Agents Toolkit Skills
- Generate Manifest - It writes the declarativeAgent.json with correct schema references
- Write Instructions - It crafts the agent’s behavior instructions based on your scenario
- Review - You (or another AI skill) review the output for quality
- Iterate - Refine the prompt, regenerate, repeat
Each cycle takes minutes, not hours. And because the coding agent retains context across the session, each iteration builds on the last. You’re not starting over. You’re refining.
Without skills, building a declarative agent looks like this:
- Read the docs to understand the manifest schema
- Manually scaffold the project structure
- Hand-author JSON manifests (hope you get the nesting right)
- Look up CLI commands for adding plugins
- Debug schema version mismatches
- Figure out the deployment flags
- Copy-paste the test URL
With skills, it collapses to this:
- “Scaffold a new agent for expense approvals”
- “Add an API plugin for our finance API”
- “Deploy it”
That’s it. The skill carries the schema knowledge, validates your manifest in real time, refuses to deploy when errors exist, and always produces the test link. It’s the difference between reading about building agents and actually shipping them.
The key insight is that coding agents excel at generating structured configuration files, which is exactly what declarative agents are. You’re not asking the AI to write complex orchestration logic. You’re asking it to produce well-formed JSON, thoughtful instructions, and clean project scaffolding. That’s squarely in the sweet spot.
A Real Session: Empty Folder to Deployed Agent
Here’s an actual session, end to end. No theoretical handwaving. Real prompts, real output, real timeline.
Step 1: Scaffold the Project (~2 minutes)
I opened my terminal in an empty folder and started with a clear prompt:
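It was a prompt in the spirit of the one from the opening (this exact wording is illustrative):

```text
Create a declarative agent called "Escalation Helper" that helps our
support team manage issue escalations. Scaffold the full project.
```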
The coding agent, powered by the declarative-agent-developer Agents Toolkit Skill, generated the full project tree:
escalation-helper/
├── appPackage/
│ ├── manifest.json
│ ├── declarativeAgent.json
│ └── instructions.txt
├── src/
│ └── agent.tsp
├── package.json
└── tsconfig.json
Everything in the right place. Correct folder conventions. Valid JSON from the start.
Step 2: Generate the Manifest (~3 minutes)
Next, I asked for something more specific:
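Something along these lines (illustrative wording; the capability names and site URL are the ones that appear in the resulting manifest):

```text
Add WebSearch so the agent can pull in best practices, and add
OneDriveAndSharePoint scoped to https://contoso.sharepoint.com/sites/SupportKB,
our internal knowledge base.
```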
The coding agent updated declarativeAgent.json with both capabilities, proper schema references, and even added a meaningful description. Here’s what it produced:
{
"$schema": "https://developer.microsoft.com/json-schemas/copilot/declarative-agent/v1.6/schema.json",
"version": "v1.6",
"name": "Escalation Helper",
"description": "Helps support teams manage escalation workflows using internal knowledge and web best practices.",
"instructions": "$[file('instructions.txt')]",
"capabilities": [
{ "name": "WebSearch" },
{
"name": "OneDriveAndSharePoint",
"items_by_url": [
{
"url": "https://contoso.sharepoint.com/sites/SupportKB"
}
]
}
]
}
No typos. No schema mismatches. Correct version reference. This would have taken me 10 minutes to write by hand (and I’d probably have misspelled something).
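That version consistency is also easy to sanity-check yourself. A minimal sketch (my own check, not part of the toolkit):

```python
import json
import re

def schema_version_matches(raw_manifest: str) -> bool:
    """Return True when the manifest's "version" field agrees with the
    version embedded in its "$schema" URL (e.g. both say v1.6)."""
    doc = json.loads(raw_manifest)
    match = re.search(r"/declarative-agent/(v[\d.]+)/", doc.get("$schema", ""))
    return match is not None and match.group(1) == doc.get("version")

# Usage against the generated file:
#   with open("appPackage/declarativeAgent.json") as f:
#       assert schema_version_matches(f.read())
```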
Step 3: Write the Instructions (~5 minutes)
This is where the magic really shows. Instructions are the soul of a declarative agent. They define its personality, its boundaries, how it responds, and what it refuses to do. Writing good instructions is hard. Writing them from scratch is even harder.
The coding agent produced a comprehensive instructions.txt with about 40 lines of clear, well-structured guidance:
# Escalation Helper - Agent Instructions
You are the Escalation Helper, an AI assistant that helps customer support teams
manage issue escalation workflows efficiently and consistently.
## Core Responsibilities
- Help agents determine when and how to escalate support issues
- Identify the correct escalation path based on issue type, severity, and SLA
- Draft professional escalation summaries for handoff
- Track open escalations and suggest follow-up actions
## Escalation Decision Criteria
- **Severity 1 (Critical)**: System outage affecting multiple customers → Escalate
immediately to Engineering On-Call
- **Severity 2 (High)**: Single customer blocked, no workaround → Escalate to
Senior Support within 2 hours
- **Severity 3 (Medium)**: Feature degradation with workaround → Track and escalate
if unresolved after 24 hours
- **Severity 4 (Low)**: Enhancement request or cosmetic issue → Log and review in
weekly triage
## Guardrails
- Never provide legal advice. If a customer mentions legal action, advise the agent
to loop in their manager immediately.
- For any "customer threatening to leave" scenario, always recommend involving a
manager before responding to the customer.
- Do not share internal severity classifications with external customers.
## Response Format
- Use bullet points for action items
- Include ticket ID and severity in every escalation summary
- Always end escalation drafts with a "Next Steps" section
I didn’t write any of that. The coding agent produced it based entirely on my prompt, using the skill’s knowledge of instruction best practices.
The quality of generated instructions depends heavily on how specific your prompt is. Vague prompts like “help with support stuff” produce generic instructions. Detailed prompts with scenarios, tone guidance, and edge cases produce instructions that actually work in production.
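To make that concrete (the detailed prompt here is illustrative):

```text
Vague:    Help with support stuff.
Specific: You help support engineers manage escalations. Classify issues
          into Severity 1-4, draft handoff summaries that include ticket
          ID and severity, and never give legal advice.
```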
Step 4: Add an API Plugin (~5 minutes)
Our agent needed to talk to an internal ticketing system. Time to add an API plugin:
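The prompt went roughly like this (the endpoint details are illustrative):

```text
Add an API plugin for our internal ticketing system. The agent should be
able to look up a ticket by ID and create a new escalation ticket.
```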
The coding agent generated both the OpenAPI specification and the ai-plugin.json manifest, wired them into the declarative agent configuration, and even added proper parameter descriptions so Copilot would know when and how to call each endpoint.
Step 5: Deploy and Test (~5 minutes)
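Something as simple as this does it (illustrative wording):

```text
Deploy the Escalation Helper to my tenant so I can test it in Copilot.
```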
One natural-language prompt, and the skill handled the entire deployment pipeline: install, compile, provision, package, and return a clickable test link. A browser click later, there it was. A working agent. Understanding my questions. Searching the web. Referencing our SharePoint docs. Calling the ticketing API.
Twenty minutes. Empty folder to deployed agent. Zero files authored by hand.
Don’t skip the review step just because the output looks good. Coding agents can produce plausible-looking configurations that have subtle issues: wrong schema versions, overly permissive instructions, or API specs that don’t match your actual endpoints. Always verify before deploying to production.
Key Scenarios Beyond Scaffolding
The walkthrough above covers the “new agent from scratch” flow, but Agents Toolkit Skills unlock a bunch of other scenarios that come up constantly in real projects.
Localize Your Agent for Global Rollout
Multi-language support has historically been tedious. The skill produces tokenized manifests and language resource files, following the exact patterns the platform expects:
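For example (the target languages here are illustrative):

```text
Localize the Escalation Helper into French, German, and Japanese.
```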
No manual string extraction. No guessing at file naming conventions. The skill handles it.
Deploy and Test in One Step
Every edit ends with a deployment and a clickable test link. No guessing whether your changes are live:
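A single prompt covers it (illustrative wording):

```text
Redeploy with my latest changes and give me the test link.
```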
Build Rich UI Widgets for Copilot Chat
With the ui-widget-developer skill, you can build MCP servers that return structured HTML content, rendering interactive widgets directly inside Copilot Chat. Think dashboards, forms, and data visualizations, all served through the MCP protocol.
Wire Up MCP Servers
Need to connect your agent to an MCP server? Just describe it:
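For example (the server described here is hypothetical):

```text
Connect the agent to our internal knowledge-base MCP server and expose
its search tool to Copilot.
```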
The skill knows the exact CLI commands, schema constraints, and version gates. It handles the plumbing; you focus on the scenario.
Prompt Engineering for Agent Code
Not all prompts are created equal. After dozens of these sessions, I’ve learned what separates a productive prompt from one that sends you in circles.
Be Specific About Schema Versions
Bad prompt:
Good prompt:
The version matters. The schema structure changes between versions, and a coding agent working from outdated training data might default to an older format. Call it out explicitly.
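To make the contrast concrete (both wordings are illustrative):

```text
Bad:  Create an agent manifest.
Good: Create a declarative agent manifest targeting schema version v1.6,
      with the WebSearch capability enabled.
```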
Include the Scenario, Not Just the Feature
Bad prompt:
Good prompt:
When the coding agent understands the why, it makes better decisions about the how. The instructions it generates will be more relevant. The capability configuration will be more precise.
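For instance (illustrative wordings):

```text
Bad:  Add SharePoint capability.
Good: This agent helps support engineers find escalation runbooks. Add the
      OneDriveAndSharePoint capability scoped to the site where those
      runbooks live.
```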
Reference Agents Toolkit Conventions
Coding agents work best when you anchor them to specific conventions. The Agents Toolkit Skills provide this context automatically, but reinforcing it in your prompts helps ensure consistency.
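A line like this in your prompt anchors the output (illustrative):

```text
Follow the standard Agents Toolkit layout: manifest and instructions under
appPackage/, and target the v1.6 declarative agent schema.
```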
Treat your prompts like you’d treat a design document. The more context and constraints you provide upfront, the less iteration you’ll need downstream. Spending 2 extra minutes on a detailed prompt saves 10 minutes of back-and-forth.
The Review Loop
Here’s something I learned the hard way: generating code fast is only valuable if the code is correct. Speed without quality is just fast failure.
That’s where review skills come in. After your coding agent generates the agent project, run it through a structured review covering three areas:
- Technical Review - Manifest schema matches the declared version, capability configurations are valid, the OpenAPI spec aligns with the actual API, and instructions are clear and unambiguous.
- Content Review - Instructions are free of hallucinated capabilities, tone is consistent across the prompt, and no instructions could cause unexpected behavior.
- Security Review - The API plugin uses appropriate authentication, SharePoint access is scoped tightly, and the instructions are resilient to prompt injection.
You can use coding agents for the review too. Ask your coding agent to audit the generated files against the official schema documentation. It’s agents reviewing agents. Meta on meta.
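A slice of the technical review can even be scripted. A minimal sketch (these checks are my own, illustrative of the idea, not the official Agents Toolkit validator):

```python
import json

# Fields the generated manifests in this walkthrough always carry.
REQUIRED_FIELDS = ("$schema", "version", "name", "instructions")

def review_manifest(raw_manifest: str) -> list[str]:
    """Return a list of problems found in a declarative agent manifest."""
    doc = json.loads(raw_manifest)
    problems = [f"missing required field: {field}"
                for field in REQUIRED_FIELDS if not doc.get(field)]
    if not doc.get("capabilities"):
        problems.append("no capabilities declared")
    # The version string should also appear in the $schema URL.
    if doc.get("version") and doc["version"] not in doc.get("$schema", ""):
        problems.append("version does not match $schema URL")
    return problems
```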
This review loop (generate, review, refine) is what turns a 20-minute prototype into a production-ready agent.
Getting Started
Getting up and running takes about two minutes.
Prerequisites
- Node.js 18+: Download from nodejs.org
- GitHub Copilot (VS Code or CLI) or Claude Code
Step 1: Add the Work IQ Marketplace
/plugin marketplace add microsoft/work-iq
Step 2: Install the Plugin
/plugin install microsoft-365-agents-toolkit@work-iq
That’s your one-time setup. Restart your coding agent, and you’re ready.
Step 3: Build Your First Agent
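The three prompts look much like the earlier trio (illustrative):

```text
1. Scaffold a new declarative agent for tracking team action items.
2. Add the WebSearch capability.
3. Deploy it to my tenant.
```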
That’s it. Three prompts. The skill walks you through naming, description, and initial capabilities; produces a deployable project; wires up the capabilities; and provisions it to your tenant. You’ll have a working agent inside M365 Copilot in minutes.
For teams adopting coding agents in their daily workflow, this is a force multiplier. A developer who has never built a Copilot agent before can ship their first one in a single sitting, guided by the skill every step of the way.
The Work IQ plugin marketplace also includes the workiq plugin for querying Microsoft 365 data (emails, meetings, documents) and the workiq-productivity plugin for read-only productivity insights. Check out the full plugin catalog for details.
What’s Next
This workflow is evolving fast. Here’s where I see it heading:
- Multi-agent scaffolding - Prompt a coding agent to generate an entire fleet of agents with shared configurations and coordinated capabilities
- CI/CD integration - Coding agents generating GitHub Actions pipelines that validate, build, and deploy agents automatically
- Instruction testing - Automated test harnesses that verify agent behavior against expected outputs before deployment
- Skills ecosystem growth - As the Agents Toolkit Skills marketplace expands, coding agents will get even better at generating specialized agent configurations
- Deeper MCP integration - Tighter authoring-to-testing loops across the full agent lifecycle
The gap between “I have an idea for an agent” and “I have a deployed agent” is shrinking to minutes. And that changes everything about how we think about building AI solutions for the enterprise.
Start with something small. A team helper. A knowledge base assistant. A workflow automator. Install the Microsoft 365 Agents Toolkit plugin, fire up your coding agent, and scaffold it. Review the output. Deploy it. And watch what happens when AI builds AI.
It’s turtles all the way down.
If you want to dig deeper into the building blocks this workflow generates, check out scaffolding your first declarative agent, crafting effective agent instructions, API plugins for declarative agents, MCP servers for declarative agents, and localizing declarative agents.
Resources
- 📚 Microsoft 365 Agents Toolkit Fundamentals
- 📖 Declarative Agents for Microsoft 365 Copilot Overview
- 🔧 Build Declarative Agents with TypeSpec
- ✍️ Writing Effective Declarative Agent Instructions
- 🔌 API Plugins for Microsoft 365 Copilot
- 🔌 Build MCP Plugins for Copilot
- 📐 Declarative Agent Manifest Schema (v1.6)
- 🖥️ Agents Toolkit CLI Reference
- 🚀 Microsoft 365 Copilot Extensibility Overview
Have questions or want to share what you're building? Connect with me on LinkedIn or check out more on The Manifest.