>_ The Manifest

I asked an AI to build my AI agent.

Read that sentence again. Let it sit for a moment. Because that’s exactly what happened last Tuesday afternoon when I opened a terminal, fired up GitHub Copilot CLI, and said: “Create a declarative agent that helps our team manage support escalations.”

Twenty minutes later, I had a fully scaffolded project, a polished declarativeAgent.json, thoughtful instructions, an API plugin wired up, and a deployed agent running in Copilot. I didn’t author a single file manually. The coding agent did it all.

This is the meta workflow. And once you experience it, there’s no going back.

This workflow only became this smooth because of a secret ingredient: Agents Toolkit Skills. Before the full walkthrough, it’s worth understanding what they are and why they matter.

What Are Agents Toolkit Skills?

Skills are curated knowledge modules that plug into your coding agent. Think of them as specialized personas: when you install the Microsoft 365 Agents Toolkit plugin from the microsoft/work-iq marketplace, your coding agent gains deep expertise in the M365 Copilot extensibility surface. Schemas, best practices, CLI commands, deployment workflows, and more.

📝 Note

The Microsoft 365 Agents Toolkit plugin and its skills are currently in public preview. Expect rapid iteration, and open an issue if you hit a rough edge.

The plugin ships with three skills today:

Skill | What It Does
declarative-agent-developer | End-to-end agent lifecycle: scaffolding, manifest authoring, capability configuration, API and MCP plugin integration, localization, and deployment
install-atk | Install or update the Agents Toolkit CLI and VS Code extension
ui-widget-developer | Build MCP servers with rich interactive widget rendering for Copilot Chat

Each skill carries its own decision trees, validation rules, and safety guardrails so the agent does the right thing, and refuses to do the wrong thing.

Without skills, a coding agent is guessing at what a declarative agent manifest should look like based on training data. With skills, it knows. It knows the schema version, the folder conventions, the CLI flags, the deployment flow. That’s the difference between a plausible output and a correct one.

💡 Tip

The skills enforce strict safety guardrails. They won’t silently create files in non-agent projects, won’t deploy broken manifests, and always follow a Detect, Inform, Ask protocol when they encounter issues. You stay in control.

The Coding Agent + Agents Toolkit Skills Workflow

What’s actually happening here sounds like magic, but it’s really just a well-structured loop.

A coding agent (GitHub Copilot in VS Code, GitHub Copilot CLI, or Claude Code) is a tool that reads your prompts, understands your project context, and generates code, configuration, and files directly in your workspace. When you combine that with the Microsoft 365 Agents Toolkit skills, the coding agent doesn’t just guess at what a declarative agent looks like. It knows the schema, the conventions, the file structure, and the best practices.

The workflow is a tight loop:

  1. Prompt - You describe what you want in natural language
  2. Scaffold - The coding agent generates the project structure using Agents Toolkit Skills
  3. Generate Manifest - It writes the declarativeAgent.json with correct schema references
  4. Write Instructions - It crafts the agent’s behavior instructions based on your scenario
  5. Review - You (or another AI skill) review the output for quality
  6. Iterate - Refine the prompt, regenerate, repeat

Each cycle takes minutes, not hours. And because the coding agent retains context across the session, each iteration builds on the last. You’re not starting over. You’re refining.

Without skills, building a declarative agent looks like this:

  1. Read the docs to understand the manifest schema
  2. Manually scaffold the project structure
  3. Hand-author JSON manifests (hope you get the nesting right)
  4. Look up CLI commands for adding plugins
  5. Debug schema version mismatches
  6. Figure out the deployment flags
  7. Copy-paste the test URL

With skills, it collapses to this:

  1. “Scaffold a new agent for expense approvals”
  2. “Add an API plugin for our finance API”
  3. “Deploy it”

That’s it. The skill carries the schema knowledge, validates your manifest in real-time, refuses to deploy when errors exist, and always produces the test link. It’s the difference between reading about building agents and actually shipping them.

💡 Tip

The key insight is that coding agents excel at generating structured configuration files, which is exactly what declarative agents are. You’re not asking the AI to write complex orchestration logic. You’re asking it to produce well-formed JSON, thoughtful instructions, and clean project scaffolding. That’s squarely in the sweet spot.

A Real Session: Empty Folder to Deployed Agent

Here’s an actual session, end to end. No theoretical handwaving. Real prompts, real output, real timeline.

Step 1: Scaffold the Project (~2 minutes)

I opened my terminal in an empty folder and started with a clear prompt:

GitHub Copilot

Create a new declarative agent project for M365 Copilot using the Agents Toolkit. The agent should help customer support teams manage escalation workflows. Name it “Escalation Helper”. Use the latest Agents Toolkit project structure.

Reading project structure and Agents Toolkit skill references…

skill(declarative-agent-developer)

Create appPackage/manifest.json +42 -0

Create appPackage/declarativeAgent.json +18 -0

Create appPackage/instructions.txt +1 -0

Create package.json +12 -0

Scaffolded Escalation Helper with manifest v1.6, declarative agent config, and placeholder instructions. Project is ready for capability wiring.

The coding agent, powered by the declarative-agent-developer Agents Toolkit Skill, generated the full project tree:

escalation-helper/
├── appPackage/
│   ├── manifest.json
│   ├── declarativeAgent.json
│   └── instructions.txt
├── src/
│   └── agent.tsp
├── package.json
└── tsconfig.json

Everything in the right place. Correct folder conventions. Valid JSON from the start.

Step 2: Generate the Manifest (~3 minutes)

Next, I asked for something more specific:

GitHub Copilot

Update the declarativeAgent.json to include WebSearch and OneDriveAndSharePoint capabilities. The agent should be able to search the web for escalation best practices and also access our internal SharePoint knowledge base.

Read appPackage/declarativeAgent.json (18 lines)

skill(declarative-agent-developer)

Validating capability names against schema v1.6…

Edit appPackage/declarativeAgent.json +12 -1

Added WebSearch and OneDriveAndSharePoint to the capabilities array. Wired items_by_url pointing to your SharePoint Support Knowledge Base site. Schema validation passed.

The coding agent updated declarativeAgent.json with both capabilities, proper schema references, and even added a meaningful description. Here’s what it produced:

{
  "$schema": "https://developer.microsoft.com/json-schemas/copilot/declarative-agent/v1.6/schema.json",
  "version": "v1.6",
  "name": "Escalation Helper",
  "description": "Helps support teams manage escalation workflows using internal knowledge and web best practices.",
  "instructions": "$[file('instructions.txt')]",
  "capabilities": [
    { "name": "WebSearch" },
    {
      "name": "OneDriveAndSharePoint",
      "items_by_url": [
        {
          "url": "https://contoso.sharepoint.com/sites/SupportKB"
        }
      ]
    }
  ]
}

No typos. No schema mismatches. Correct version reference. This would have taken me 10 minutes to write by hand (and I’d probably have misspelled something).

Step 3: Write the Instructions (~5 minutes)

This is where the magic really shows. Instructions are the soul of a declarative agent. They define its personality, its boundaries, how it responds, and what it refuses to do. Writing good instructions is hard. Writing them from scratch is even harder.

GitHub Copilot

Write detailed instructions for the Escalation Helper agent. It should help support teams with:

  - Identifying when to escalate issues
  - Finding the right escalation path based on issue type
  - Drafting escalation summaries
  - Tracking follow-ups

Tone should be professional but supportive. Never provide legal advice. Always suggest involving a manager for customer-threatening-to-leave scenarios.

Read appPackage/declarativeAgent.json to check current instructions reference…

skill(declarative-agent-developer)

Planning instruction structure: persona definition, escalation decision tree, severity matrix, guardrails for legal and at-risk scenarios, response formatting rules.

Edit appPackage/instructions.txt +38 -1

Wrote 38 lines of structured instructions covering core responsibilities, severity-based escalation criteria (Sev1 through Sev4), manager-loop guardrails for at-risk customers, legal advice refusal patterns, and response formatting conventions.

The coding agent produced a comprehensive instructions.txt with about 40 lines of clear, well-structured guidance:

# Escalation Helper - Agent Instructions

You are the Escalation Helper, an AI assistant that helps customer support teams
manage issue escalation workflows efficiently and consistently.

## Core Responsibilities
- Help agents determine when and how to escalate support issues
- Identify the correct escalation path based on issue type, severity, and SLA
- Draft professional escalation summaries for handoff
- Track open escalations and suggest follow-up actions

## Escalation Decision Criteria
- **Severity 1 (Critical)**: System outage affecting multiple customers → Escalate
  immediately to Engineering On-Call
- **Severity 2 (High)**: Single customer blocked, no workaround → Escalate to
  Senior Support within 2 hours
- **Severity 3 (Medium)**: Feature degradation with workaround → Track and escalate
  if unresolved after 24 hours
- **Severity 4 (Low)**: Enhancement request or cosmetic issue → Log and review in
  weekly triage

## Guardrails
- Never provide legal advice. If a customer mentions legal action, advise the agent
  to loop in their manager immediately.
- For any "customer threatening to leave" scenario, always recommend involving a
  manager before responding to the customer.
- Do not share internal severity classifications with external customers.

## Response Format
- Use bullet points for action items
- Include ticket ID and severity in every escalation summary
- Always end escalation drafts with a "Next Steps" section

I didn’t write any of that. The coding agent produced it based entirely on my prompt, using the skill’s knowledge of instruction best practices.

📝 Note

The quality of generated instructions depends heavily on how specific your prompt is. Vague prompts like “help with support stuff” produce generic instructions. Detailed prompts with scenarios, tone guidance, and edge cases produce instructions that actually work in production.

Step 4: Add an API Plugin (~5 minutes)

Our agent needed to talk to an internal ticketing system. Time to add an API plugin:

GitHub Copilot

Add an API plugin that connects to our internal ticketing REST API at https://api.contoso.com/tickets. It should support:

  - GET /tickets/{id} to retrieve ticket details
  - POST /tickets/{id}/escalate to trigger an escalation
  - GET /tickets?status=open&priority=high to list urgent tickets

Use OAuth with OAuthPluginVault for authentication. Generate the OpenAPI spec and the plugin manifest.

Read appPackage/declarativeAgent.json (30 lines)

skill(declarative-agent-developer)

Analyzing endpoint patterns for OpenAPI spec generation…

Create appPackage/apiPlugin.json +54 -0

Create appPackage/openapi.yaml +87 -0

Edit appPackage/declarativeAgent.json +8 -0

Generated OpenAPI 3.0 spec with three operations (getTicket, escalateTicket, listUrgentTickets). Created apiPlugin.json at schema_version v2.4 with OAuthPluginVault auth and rich description_for_model strings. Wired the plugin reference into declarativeAgent.json.

The coding agent generated both the OpenAPI specification and the apiPlugin.json manifest, wired them into the declarative agent configuration, and even added proper parameter descriptions so Copilot would know when and how to call each endpoint.
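For reference, here's a plausible shape for the generated apiPlugin.json, following the API plugin manifest pattern the transcript describes (schema_version v2.4, OAuthPluginVault auth). The $schema URL, descriptions, and the OAuth registration ID placeholder are illustrative assumptions, not the skill's actual output:

```json
{
  "$schema": "https://developer.microsoft.com/json-schemas/copilot/plugin/v2.4/schema.json",
  "schema_version": "v2.4",
  "name_for_human": "Contoso Ticketing",
  "description_for_human": "Look up, list, and escalate support tickets.",
  "functions": [
    { "name": "getTicket", "description": "Retrieve full details for a ticket by its ID." },
    { "name": "escalateTicket", "description": "Trigger an escalation for a ticket." },
    { "name": "listUrgentTickets", "description": "List open, high-priority tickets." }
  ],
  "runtimes": [
    {
      "type": "OpenApi",
      "auth": {
        "type": "OAuthPluginVault",
        "reference_id": "<your-oauth-registration-id>"
      },
      "spec": { "url": "openapi.yaml" },
      "run_for_functions": ["getTicket", "escalateTicket", "listUrgentTickets"]
    }
  ]
}
```

The run_for_functions list is what tells Copilot which operations the OpenAPI runtime serves; keeping it in sync with the functions array is exactly the kind of bookkeeping the skill handles for you.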

Step 5: Deploy and Test (~5 minutes)

GitHub Copilot

Deploy my agent to my dev tenant so I can test it.

skill(declarative-agent-developer)

Running atk provision --env dev

Installing dependencies… done.

Building app package… 3 files bundled.

Uploading to dev tenant… registered app ID a1b2c3d4-e5f6-7890-abcd-ef1234567890.

✅ Agent deployed successfully!

🚀 Test your agent in M365 Copilot: https://m365.cloud.microsoft/chat/?titleId=T_d4f8b2a1-6e93-4c07-b5d1-3a9e7f1c8b42

One natural-language prompt, and the skill handled the entire deployment pipeline: install, compile, provision, package, and return a clickable test link. A browser click later, there it was. A working agent. Understanding my questions. Searching the web. Referencing our SharePoint docs. Calling the ticketing API.

Twenty minutes. Empty folder to deployed agent. Zero files authored by hand.

⚠️ Warning

Don’t skip the review step just because the output looks good. Coding agents can produce plausible-looking configurations that have subtle issues: wrong schema versions, overly permissive instructions, or API specs that don’t match your actual endpoints. Always verify before deploying to production.

Key Scenarios Beyond Scaffolding

The walkthrough above covers the “new agent from scratch” flow, but Agents Toolkit Skills unlock a bunch of other scenarios that come up constantly in real projects.

Localize Your Agent for Global Rollout

Multi-language support has historically been tedious. The skill produces tokenized manifests and language resource files, following the exact patterns the platform expects:

GitHub Copilot

Localize my agent into French and Japanese.

Read appPackage/manifest.json, declarativeAgent.json…

skill(declarative-agent-developer)

Scanning for localizable strings: found 4 in manifest.json, 2 in declarativeAgent.json.

Create appPackage/fr.json +6 -0

Create appPackage/ja.json +6 -0

Edit appPackage/manifest.json +8 -0

Extracted 6 localizable strings, generated fr.json and ja.json resource files with translations, and wired localizationInfo into manifest.json. M365 Copilot will now serve the correct language per user locale.

No manual string extraction. No guessing at file naming conventions. The skill handles it.
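The wiring the transcript mentions follows the app manifest's localizationInfo pattern; a sketch of what lands in manifest.json (the default language tag is an assumption, file names per the transcript):

```json
{
  "localizationInfo": {
    "defaultLanguageTag": "en-us",
    "additionalLanguages": [
      { "languageTag": "fr-fr", "file": "fr.json" },
      { "languageTag": "ja-jp", "file": "ja.json" }
    ]
  }
}
```

Each resource file is a flat map of string IDs to translated values, so adding a language later is just one more file plus one more entry in additionalLanguages.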

Deploy and Test in One Step

Every edit ends with a deployment and a clickable test link. No guessing whether your changes are live:

GitHub Copilot

Deploy my agent.

skill(declarative-agent-developer)

Running atk provision --env dev

Packaging app… done. Uploading… deployed.

✅ Agent deployed successfully!

🚀 Test your agent in M365 Copilot: https://m365.cloud.microsoft/chat/?titleId=T_d4f8b2a1-6e93-4c07-b5d1-3a9e7f1c8b42

Build Rich UI Widgets for Copilot Chat

With the ui-widget-developer skill, you can build MCP servers that return structured HTML content, rendering interactive widgets directly inside Copilot Chat. Think dashboards, forms, and data visualizations, all served through the MCP protocol.

GitHub Copilot

Build an MCP server that renders a ticket status dashboard as a widget in Copilot Chat.

skill(ui-widget-developer)

Create src/mcpServer.ts +64 -0

Create src/widgets/ticketDashboard.html +42 -0

Edit appPackage/declarativeAgent.json +6 -0

Scaffolded a RemoteMCPServer runtime with an HTML widget renderer. Created the dashboard tool returning a styled ticket-status card via the MCP Apps SDK. Registered the server on your declarative agent’s plugins list.

Wire Up MCP Servers

Need to connect your agent to an MCP server? Just describe it:

GitHub Copilot

Connect my agent to an MCP server at https://mcp.contoso.com/sse

Read appPackage/declarativeAgent.json (42 lines)

skill(declarative-agent-developer)

Validating HTTPS URL format for mcp.contoso.com… passed.

Edit appPackage/declarativeAgent.json +7 -0

Added MCP plugin reference with RemoteMCPServer runtime pointing to https://mcp.contoso.com/sse. Your agent can now call tools exposed by that server at runtime.
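Based on the transcript's mention of a RemoteMCPServer runtime, one plausible shape for the plugin manifest entry looks like this. The schema is still in preview and evolving, so treat the field names here as assumptions and check the skill's generated output rather than this sketch:

```json
{
  "runtimes": [
    {
      "type": "RemoteMCPServer",
      "spec": { "url": "https://mcp.contoso.com/sse" }
    }
  ]
}
```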

The skill knows the exact CLI commands, schema constraints, and version gates. It handles the plumbing; you focus on the scenario.

Prompt Engineering for Agent Code

Not all prompts are created equal. After dozens of these sessions, I’ve learned what separates a productive prompt from one that sends you in circles.

Be Specific About Schema Versions

Bad prompt:

GitHub Copilot

Make a declarative agent manifest

Good prompt:

GitHub Copilot

Generate a declarativeAgent.json using schema version v1.6 with WebSearch capability and file-referenced instructions

The version matters. The schema structure changes between versions, and a coding agent working from outdated training data might default to an older format. Call it out explicitly.

Include the Scenario, Not Just the Feature

Bad prompt:

GitHub Copilot

Add SharePoint capability

Good prompt:

GitHub Copilot

Add OneDriveAndSharePoint capability scoped to our HR policies site at https://contoso.sharepoint.com/sites/HRPolicies. The agent helps new employees find onboarding documents and benefits information.

When the coding agent understands the why, it makes better decisions about the how. The instructions it generates will be more relevant. The capability configuration will be more precise.

Reference Agents Toolkit Conventions

GitHub Copilot

Follow the M365 Agents Toolkit project structure conventions. Instructions should be in a separate instructions.txt file referenced from the manifest using $[file('instructions.txt')] syntax. API plugin specs should be in the appPackage folder.

Coding agents work best when you anchor them to specific conventions. The Agents Toolkit Skills provide this context automatically, but reinforcing it in your prompts helps ensure consistency.

💡 Tip

Treat your prompts like you’d treat a design document. The more context and constraints you provide upfront, the less iteration you’ll need downstream. Spending 2 extra minutes on a detailed prompt saves 10 minutes of back-and-forth.

The Review Loop

Here’s something I learned the hard way: generating code fast is only valuable if the code is correct. Speed without quality is just fast failure.

That’s where review skills come in. After your coding agent generates the agent project, run it through a structured review covering three areas:

  1. Technical Review - Manifest schema matches the declared version, capability configurations are valid, the OpenAPI spec aligns with the actual API, and instructions are clear and unambiguous.

  2. Content Review - Instructions are free of hallucinated capabilities, tone is consistent across the prompt, and no instructions could cause unexpected behavior.

  3. Security Review - The API plugin uses appropriate authentication, SharePoint access is scoped tightly, and the instructions are resilient to prompt injection.

You can use coding agents for the review too. Ask your coding agent to audit the generated files against the official schema documentation. It’s agents reviewing agents. Meta on meta.

GitHub Copilot

Review the declarativeAgent.json and instructions.txt for correctness. Check that the schema version is v1.6, all capabilities are properly configured, and the instructions don’t contain any contradictions or overly broad permissions. Flag anything that could cause unexpected behavior in production.

skill(declarative-agent-developer)

Read appPackage/declarativeAgent.json (42 lines)

Read appPackage/instructions.txt (38 lines)

Checking schema version… v1.6 ✓

Checking capability names against allowed list… WebSearch ✓, OneDriveAndSharePoint ✓

Scanning instructions for contradictions… none found.

Scanning for overly broad permission patterns… 1 suggestion: consider tightening “help with support” to specific escalation scenarios only.

All validations passed. One minor suggestion flagged above.

This review loop (generate, review, refine) is what turns a 20-minute prototype into a production-ready agent.
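If you want a deterministic backstop alongside the AI review, the technical checks are easy to script. Here's a minimal sketch in Python; the allowed-capability set is an illustrative subset for this example, not the official schema's full list:

```python
EXPECTED_VERSION = "v1.6"
# Illustrative subset -- consult the official declarative agent schema for the full list.
ALLOWED_CAPABILITIES = {"WebSearch", "OneDriveAndSharePoint"}

def review_manifest(manifest: dict) -> list[str]:
    """Return a list of human-readable issues; an empty list means the checks passed."""
    issues = []
    # The declared version and the $schema URL should agree.
    if manifest.get("version") != EXPECTED_VERSION:
        issues.append(f"version is {manifest.get('version')!r}, expected {EXPECTED_VERSION!r}")
    if EXPECTED_VERSION not in manifest.get("$schema", ""):
        issues.append("$schema URL does not match the declared version")
    # Every capability name must be one the schema recognizes.
    for cap in manifest.get("capabilities", []):
        if cap.get("name") not in ALLOWED_CAPABILITIES:
            issues.append(f"unknown capability: {cap.get('name')!r}")
    if not manifest.get("instructions"):
        issues.append("instructions are missing or empty")
    return issues

# Example: a manifest whose $schema URL lags behind its declared version.
manifest = {
    "$schema": "https://developer.microsoft.com/json-schemas/copilot/declarative-agent/v1.5/schema.json",
    "version": "v1.6",
    "name": "Escalation Helper",
    "instructions": "$[file('instructions.txt')]",
    "capabilities": [{"name": "WebSearch"}, {"name": "OneDriveAndSharePoint"}],
}
print(review_manifest(manifest))  # flags the schema/version mismatch
```

A check like this is cheap enough to run in CI on every commit, which pairs nicely with the AI-driven content and security reviews above.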

Getting Started

Getting up and running takes about two minutes.

Prerequisites

You'll need a coding agent that supports plugins (GitHub Copilot CLI or Claude Code) and a Microsoft 365 tenant where you can sideload and test agents.

Step 1: Add the Work IQ Marketplace

/plugin marketplace add microsoft/work-iq

Step 2: Install the Plugin

/plugin install microsoft-365-agents-toolkit@work-iq

That’s your one-time setup. Restart your coding agent, and you’re ready.

Step 3: Build Your First Agent

GitHub Copilot

Scaffold a new declarative agent for HR FAQ.

Add web search to my agent.

Deploy my agent.

That’s it. Three prompts. The skill walks you through naming, description, and initial capabilities, produces a deployable project, wires everything up, and provisions to your tenant. You’ll have a working agent inside M365 Copilot in minutes.

For teams adopting coding agents in their daily workflow, this is a force multiplier. A developer who has never built a Copilot agent before can ship their first one in a single sitting, guided by the skill every step of the way.

📝 Note

The Work IQ plugin marketplace also includes the workiq plugin for querying Microsoft 365 data (emails, meetings, documents) and the workiq-productivity plugin for read-only productivity insights. Check out the full plugin catalog for details.

What’s Next

This workflow is evolving fast. Here’s where I see it heading:

  1. Multi-agent scaffolding - Prompt a coding agent to generate an entire fleet of agents with shared configurations and coordinated capabilities
  2. CI/CD integration - Coding agents generating GitHub Actions pipelines that validate, build, and deploy agents automatically
  3. Instruction testing - Automated test harnesses that verify agent behavior against expected outputs before deployment
  4. Skills ecosystem growth - As the Agents Toolkit Skills marketplace expands, coding agents will get even better at generating specialized agent configurations
  5. Deeper MCP integration - Tighter authoring-to-testing loops across the full agent lifecycle

The gap between “I have an idea for an agent” and “I have a deployed agent” is shrinking to minutes. And that changes everything about how we think about building AI solutions for the enterprise.

Start with something small. A team helper. A knowledge base assistant. A workflow automator. Install the Microsoft 365 Agents Toolkit plugin, fire up your coding agent, and scaffold it. Review the output. Deploy it. And watch what happens when AI builds AI.

It’s turtles all the way down.

If you want to dig deeper into the building blocks this workflow generates, check out scaffolding your first declarative agent, crafting effective agent instructions, API plugins for declarative agents, MCP servers for declarative agents, and localizing declarative agents.

Resources


Have questions or want to share what you're building? Connect with me on LinkedIn or check out more on The Manifest.