The Offer Package Agent was built live during a hands-on workshop hosted at Microsoft Garage, designed to help HR and business leaders move from AI insight to real deployment using Microsoft Copilot.
After evaluating multiple AI workflow options, the group selected the Offer Package Agent as the highest-impact starting point due to its direct connection to revenue, time-to-hire, and pay equity risk. This workflow consistently surfaced as a major bottleneck across organisations, slowing hiring decisions and causing candidate drop-off.
Agent name: Offer Package Agent
Primary user: Recruiters, Hiring Managers, HR Business Partners
Task this agent supports: Creating and approving competitive compensation offers
Stage of work: Talent Acquisition – Offer Creation
What success looks like:
The first step is not building the agent — it’s understanding the work.
This agent supports a single task:
Determining and justifying an appropriate compensation offer for a role.
That task was broken down into clear subtasks.
Subtasks:
This task-level clarity is critical. The agent does not replace a role — it supports a specific piece of work.
Next, the workflow was redesigned before introducing AI.
Current workflow:
Recruiters request guidance from the rewards team, often via email or spreadsheets. Responses can take days or weeks due to limited team capacity, causing hiring delays and candidate drop-off.
AI-enabled workflow:
The Offer Package Agent sits directly inside the recruiter workflow, providing immediate guidance using approved compensation data.
Subtasks handled by the agent:
Subtasks remaining human-led:
This agent operates at the subtask level; it does not have end-to-end autonomy.
Agent responsibilities:
Guardrails:
Human-in-the-loop design was a deliberate choice to manage risk and build trust.
The agent was built using Microsoft Copilot, with tools most organisations already have.
Inputs required:
Outputs produced:
Sample agent instruction:
“Based on approved internal compensation data, recommend an offer package for this role. Ensure alignment with pay bands and equity rules. Flag any risks or exceptions and explain your reasoning.”
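To make the guardrail in that instruction concrete, here is a minimal sketch of what "ensure alignment with pay bands and flag exceptions" can look like as a validation step that runs before any recommendation reaches a recruiter. The role names, pay bands, and field names below are hypothetical placeholders, not real compensation data or the actual workshop build:

```python
# Hypothetical guardrail check: validate a recommended offer against
# approved pay bands and route exceptions to human review.
# All band data below is illustrative, not real compensation data.

PAY_BANDS = {
    "Senior Engineer": (90_000, 130_000),  # (band minimum, band maximum)
}

def check_offer(role: str, base_salary: float) -> dict:
    """Return the recommendation plus any flags raised for human review."""
    band = PAY_BANDS.get(role)
    flags = []
    if band is None:
        flags.append(f"No approved pay band for role: {role}")
    else:
        low, high = band
        if base_salary < low:
            flags.append(f"Offer {base_salary:,.0f} is below band minimum {low:,.0f}")
        elif base_salary > high:
            flags.append(f"Offer {base_salary:,.0f} exceeds band maximum {high:,.0f}")
    return {
        "role": role,
        "base_salary": base_salary,
        "flags": flags,
        # Human-in-the-loop: anything flagged goes to a person, not auto-approval.
        "requires_human_review": bool(flags),
    }
```

In this sketch the agent never approves an offer itself: any out-of-band result is flagged with its reasoning, and the final decision stays with the recruiter or hiring manager, matching the human-in-the-loop design described above.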
Success is not measured by Copilot usage or logins.
What to measure (before vs after):
If the agent does not deliver measurable value, it should be adjusted or shut down. If it works, it should be scaled.
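A before/after comparison like this can be kept deliberately simple. As an illustrative sketch only (the baseline and post-deployment figures below are invented placeholders, not workshop results), measuring one metric such as time-to-offer might look like:

```python
# Illustrative before/after comparison for a single deployment metric.
# All figures are placeholder values, not measured outcomes.

def percent_change(before: float, after: float) -> float:
    """Negative result means the metric decreased (e.g. faster time-to-offer)."""
    return (after - before) / before * 100

baseline_days = 10.0   # hypothetical: avg. days from request to approved offer
with_agent_days = 3.0  # hypothetical: same metric after the agent is deployed

change = percent_change(baseline_days, with_agent_days)
print(f"Time-to-offer changed by {change:.0f}%")  # prints: Time-to-offer changed by -70%
```

The same calculation applies to whichever baseline metrics the team captures before launch; the point is that the scale-or-shut-down decision rests on a measured delta, not on usage counts.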
Agents fail when expectations don’t change.
This agent required a clear shift in how recruiters and managers work.
Enablement checklist:
Without expectation-setting, adoption typically stalls at around 20%.
AI agents are quickly becoming a foundational component of modern workforce strategy. This session reinforced that agents work best when they are built at the task and subtask level, supported by redesigned workflows and clear guardrails.
Workflow design matters more than the technology itself. Start small, prove value, and then scale.
If your team needs support identifying the right work to reinvent, redesigning workflows, and building AI agents that actually get adopted, we can help.
Get in touch to learn how we partner with organisations to move from AI ideas to deployed, measurable outcomes.
→ Explore more Agent Building Guides