April 29, 2026 · 10 min read · Agent Building

How to Write Effective Agent Instructions in Copilot Studio That Actually Work

Your agent's instructions are the single biggest factor in whether it succeeds or fails. Here are the common mistakes makers keep making, and a pattern-based approach that produces consistently better agents.

Instructions are the operating system

Copilot Studio makes building agents easy. Drag in some knowledge sources, wire up a connector, hit publish. The part that gets the least attention? The instructions field on the Overview page. It's the text box most makers fill in last, with three generic sentences, then wonder why their agent goes off the rails.

Here's the thing: every decision your agent makes flows from those instructions. Which tool to call, how to phrase a response, when to escalate, what topics to avoid. The orchestrator reads your instructions and uses them to guide every single interaction. Vague instructions produce vague agents. Specific instructions produce agents that actually do what you built them to do.

How the orchestrator uses instructions

Copilot Studio's generative orchestrator reads your instructions in three moments: when deciding which tool or knowledge source to call, when filling input parameters for those tools, and when generating the final response. If your instructions are weak in any of these areas, the agent guesses. And it will guess wrong.
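Those three moments can be sketched in code. The sketch below is purely conceptual, not Copilot Studio's actual implementation: the function names are hypothetical, and simple string checks stand in for the LLM reasoning the orchestrator really performs at each step.

```python
# Conceptual sketch only -- NOT Copilot Studio's real implementation.
# The three functions mark the three moments where instructions matter;
# crude string checks stand in for the LLM reasoning at each step.

def pick_tool(instructions, message, tools):
    # Moment 1: decide which tool or knowledge source to call.
    for tool in tools:
        if tool.replace("_", " ") in message.lower():
            return tool
    return None

def fill_parameters(instructions, message, tool):
    # Moment 2: fill the chosen tool's input parameters.
    return {"tool": tool, "query": message}

def generate_response(instructions, tool_result):
    # Moment 3: generate the final reply under the honesty/tone rules.
    if tool_result is None and "do not guess" in instructions.lower():
        return "I couldn't find that in my knowledge sources."
    return tool_result or "Here's my best guess."

instructions = "Use the product catalogue as your primary source. Do not guess."
message = "Can you check the product catalogue?"
tool = pick_tool(instructions, message, ["product_catalogue"])
params = fill_parameters(instructions, message, tool)
reply = generate_response(instructions, None)  # simulate a lookup that found nothing
```

Notice that weak instructions degrade all three steps at once: with no "do not guess" rule, the fallback branch never fires and the agent improvises.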

The evolution of "good enough"

Let's walk through how instruction sets typically read in practice, from terrible to production-ready. These examples are based on real patterns we see across hundreds of agents.

Lazy

You are a helpful assistant. Answer questions about our products. Be professional.

No identity, no scope, no guardrails, no escalation rules. The agent will answer anything about anything, in whatever way it sees fit. "Be professional" is not an instruction the model needs. It already defaults to professional.

Better, but incomplete

You are a product support assistant for Contoso. Help customers find the right product and answer questions about features, availability, and compatibility. Always be polite and professional.

Has a role and some scope, but no constraints on what it shouldn't do, no escalation path, no knowledge source guidance, and no tone beyond "polite." When a user asks about pricing or a competitor, this agent will happily improvise.

Production-ready

You are Cora, the product support assistant for Contoso's retail division. You represent the Customer Success team.

Your goal is to help customers identify the right product for their needs without requiring human intervention. Where possible, resolve queries at first contact.

You have access to the Contoso Product Catalogue, updated monthly. Always use this as your primary source. If you cannot find the answer, tell the customer clearly and offer to raise a support ticket. Do not guess.

Never discuss pricing, promotions, competitor products, or pending product releases. If a customer raises a complaint or requests a refund, transfer immediately to a human agent; do not attempt to resolve it yourself.

Write in plain, friendly English. Keep responses concise. Use bullet points for comparisons or step-by-step answers. If a customer seems frustrated, acknowledge their issue before you respond.

Named persona, clear objective, scoped knowledge, explicit constraints, defined escalation triggers, tone guidance with emotional awareness. This agent knows exactly who it is and what it can and can't do.

The gap between the first and third example is the gap between a demo and a production deployment. Same Copilot Studio, same connectors, same knowledge sources. The only difference is the quality of the instructions.

The six things every instruction set needs

When you analyze what separates agents that work from agents that don't, a clear pattern emerges. Every effective instruction set covers six dimensions. Miss one and you've got a gap your users will find.

1. Role

Give your agent an identity, not just a job title

A role isn't "you are a helpful assistant." It's a named persona with a specific function, representing a specific team. This anchors the agent's behavior and makes it easier for users to understand what they're talking to.

Comparison

Weak: You are a helpful HR assistant.
Strong: You are Aria, the onboarding assistant for Northwind's People Operations team. You help new hires navigate their first 90 days.
2. Objectives

Define outcomes, not tasks

"Answer questions" is a task. "Resolve employee queries at first contact without escalation" is an objective. This distinction matters because objectives shape how the agent prioritizes when situations get ambiguous. Should it give a quick answer or a thorough one? Should it ask a follow-up or resolve immediately? The objective decides.

Comparison

Weak: Help users with IT problems.
Strong: Your goal is to resolve common IT issues at first contact. Reduce ticket volume by guiding users through self-service solutions before escalating to the helpdesk.
3. Constraints

Tell it what NOT to do

This is where most instruction sets fall short. Makers tell the agent what to do but forget to tell it what it must never do. Without explicit constraints, the agent fills in the blanks. It will discuss pricing when it shouldn't. It will attempt to handle complaints it should escalate. It will hallucinate answers when it should say "I don't know."

Example constraints

Topic limits: Never discuss pricing, competitor products, or unreleased features.
Escalation: If a user mentions a complaint, refund, or legal issue, transfer to a human agent immediately. Do not attempt to resolve it.
Honesty: If you cannot find the answer in your knowledge sources, say so clearly. Do not guess or fabricate information.
4. Knowledge guidance

Ground the agent in real data

Your agent has knowledge sources configured on the Knowledge page. But instructions help prioritize them. Which source should be checked first? What happens when nothing is found? The orchestrator already knows which sources exist, so you don't need to list them all. Focus on guidance for ambiguous situations and fallback behavior.

Example

Use the FAQ documents only if the question isn't relevant to Hours, Appointments, or Billing. For benefit-related questions, search within the country folder that matches the employee's location.

5. Execution rules

Map tools to scenarios explicitly

When your agent has five or more tools, the orchestrator needs help deciding which to call and when. Don't say "use the booking tool when relevant." Say when, exactly, each tool should fire. And define what happens when a tool fails or returns no results.

Example

Use the ticket creation topic only for creating new tickets. For requests related to fixing existing issues, use the troubleshooting topic instead. Use the email address from the contact field when drafting follow-up emails.

6. Tone and format

Shape the user experience, not just the answer

You don't need to tell the agent to "be professional." That's already the default. What you do need is specific formatting guidance: should it use tables for comparisons? Bullet points for steps? Short sentences or detailed explanations? And critically, how should it handle frustrated or emotional users?

Example

Write in plain, friendly English at a general audience reading level. Keep responses concise. Use tables when comparing options. If a user seems frustrated, acknowledge their issue before providing a solution.
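One way to make the six dimensions actionable is a quick coverage check before you publish. The sketch below is a crude keyword heuristic, not a real analysis tool; the cue lists are assumptions you would tune to your own team's vocabulary, and it can never replace actually reading the instructions.

```python
# Rough, illustrative linter for the six dimensions. Keyword heuristics
# only -- the cue lists below are assumptions, not an official checklist.

DIMENSIONS = {
    "role": ["you are", "you represent"],
    "objectives": ["your goal", "resolve", "reduce"],
    "constraints": ["never", "do not"],
    "knowledge": ["knowledge", "source", "faq"],
    "execution": ["use the", "tool", "topic"],
    "tone": ["plain", "concise", "bullet", "frustrat"],
}

def missing_dimensions(instructions: str) -> list[str]:
    """Return the dimensions no cue phrase was found for."""
    text = instructions.lower()
    return [dim for dim, cues in DIMENSIONS.items()
            if not any(cue in text for cue in cues)]

# The "Lazy" example from earlier fails five of the six checks:
lazy = "You are a helpful assistant. Answer questions about our products."
gaps = missing_dimensions(lazy)
```

Running it against the "Production-ready" Cora example instead returns an empty list, which is exactly the gap between a demo and a deployment.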

Common mistakes that break agents

Even experienced makers fall into these traps. Each one seems minor in isolation, but in production, they compound into agents that frustrate users and create support tickets instead of solving them.

🎯 Trying to control retrieval

Instructions influence how the agent summarizes answers, not how it retrieves documents. Trying to control search logic through instructions leads to unpredictable behavior. Configure retrieval through topics and knowledge settings instead.

🃏 Overriding Adaptive Cards

Agent instructions cannot modify how Adaptive Cards are triggered or displayed. If your cards aren’t working right, edit the card configuration directly. Don’t try to fix it through instructions.

📜 Writing a novel

More instructions aren’t always better. If your instructions are so complex the agent stops responding, strip them back and add them one by one. The system treats instructions like code. Bad code breaks things.

🔇 Disabling general knowledge

If you turn off “Use general knowledge” but rely on follow-up questions, the agent can’t ask them. It considers clarifying questions ungrounded and suppresses them, falling back to “I’m not sure how to help.”

🏷️ Mismatched tool names

When you reference a tool in your instructions, use its exact name. Even a slight spelling difference can prevent the orchestrator from matching the instruction to the right tool. Copy-paste, don’t retype.

🔀 Ignoring multi-agent impact

In a multi-agent architecture, weak instructions in the orchestrator cascade downstream. Child agents inherit ambiguous context, interpret it however they see fit, and the result is nearly impossible to debug.

A complete example: IT helpdesk agent

Let's put it all together. Here's what a production-ready instruction set looks like for a common scenario: an internal IT helpdesk agent.

Production-ready instructions

You are Atlas, the IT support assistant for Northwind Traders. You represent the IT Service Desk team.

Your goal is to resolve common IT issues at first contact without creating a support ticket. Reduce ticket volume by guiding users through self-service solutions. If you can solve the issue, solve it. If you can't, create a ticket and set expectations.

Use the IT Knowledge Base as your primary source for troubleshooting steps. For password and account issues, use the Identity Management tool. For hardware requests, use the Asset Request tool. Use the FAQ documents only if the question doesn't relate to troubleshooting, passwords, or hardware.

Never attempt to access or modify user accounts directly. Never provide security credentials or bypass instructions. If a user reports a security incident or suspected breach, escalate immediately to the Security Operations topic. Do not attempt to resolve security issues yourself.

When creating a ticket, use the Ticket Creation topic. Always include the user's name, issue description, and troubleshooting steps already attempted. Set the priority based on impact: P1 for full outage, P2 for degraded service, P3 for questions and requests.

Respond in plain English. Keep answers short and actionable. Use numbered steps for troubleshooting procedures. If a user seems frustrated, acknowledge the inconvenience before walking through the solution. Always confirm the issue is resolved before ending the conversation.

That instruction set covers all six dimensions: role (Atlas, IT Service Desk), objective (first-contact resolution), constraints (no security overrides, no account access), knowledge guidance (KB first, FAQ as fallback), execution rules (which tool for which scenario), and tone (plain English, numbered steps, empathy for frustrated users).

The instructions are just the beginning

Writing great instructions is critical, but it's only half the equation. Instructions define how your agent should behave. Evaluation tells you whether it actually does.

The best instruction sets in the world still need testing. Knowledge sources change. User questions evolve. Tool behavior shifts. What worked last month might silently break tomorrow. You need a way to continuously evaluate whether your agent's actual behavior matches the behavior your instructions intended.
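In spirit, that kind of evaluation boils down to asserting that live replies still honor the constraints you wrote. Here is a minimal hand-rolled sketch; the forbidden-term list and the simulated replies are illustrative, and a real harness would call your published agent and run many cases, not two hard-coded strings.

```python
# Illustrative only: checks a reply against the constraint "never discuss
# pricing, promotions, or competitor products". The term list is an
# assumption; a real harness would call the live agent across many cases.

FORBIDDEN_TERMS = ["price", "discount", "promotion", "competitor"]

def constraint_violations(reply: str) -> list[str]:
    """Return every forbidden term that appears in the reply."""
    reply_lower = reply.lower()
    return [term for term in FORBIDDEN_TERMS if term in reply_lower]

good_reply = "I can help you compare features across our product range."
bad_reply = "The price is $49, which beats every competitor."
good_result = constraint_violations(good_reply)   # no violations
bad_result = constraint_violations(bad_reply)     # flags two terms
```

Run checks like these on a schedule against real traffic and a regression surfaces as a failing assertion instead of a support ticket.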

That's the gap Agentowr fills. Write the best instructions you can. Then let Agentowr watch your agents in production, catch regressions, and tell you exactly when and where behavior drifts from what you designed.

Great instructions deserve great monitoring

You wrote the instructions. Agentowr makes sure your agents follow them. Continuous evaluation, real-time alerts, zero infrastructure.