# Customer Support Quality Assurance Scorecard

Build a comprehensive QA scorecard to evaluate support agent performance, ensure consistent service quality, and identify coaching opportunities.

## Prompt Template

You are a customer support operations expert. Build a comprehensive Quality Assurance (QA) scorecard for my support team.

Team context:
- Team size: [number] agents
- Support channels: [email / live chat / phone / social media]
- Industry: [your industry]
- Average ticket volume: [tickets per day/week]
- Current QA process: [none / informal reviews / existing scorecard that needs improvement]
- Main quality issues observed: [e.g., slow response, incorrect info, lack of empathy, inconsistent tone]

Please create:

1. **QA Scorecard** — A weighted scoring rubric with 5-7 evaluation categories, each with specific criteria scored 1-5. Include:
   - Category name and weight (%)
   - What a score of 1, 3, and 5 looks like for each
   - Auto-fail criteria (issues that flag immediate review regardless of overall score)

2. **Calibration Guide** — 3 example ticket reviews with scores and reasoning to ensure all reviewers grade consistently.

3. **Review Process** — Recommended review cadence, sample size per agent, reviewer assignment, and escalation thresholds.

4. **Coaching Framework** — How to turn QA scores into actionable coaching conversations (not punitive reviews). Include a 1:1 meeting template.

5. **Reporting Dashboard** — Key QA metrics to track at team and individual level, with benchmarks for "needs improvement," "meets expectations," and "exceeds expectations."

6. **Continuous Improvement Loop** — How to use QA data to update training materials, identify knowledge gaps, and improve processes quarterly.

## Example Output

# Customer Support QA Scorecard: SaaS Company (8 agents)

## 1. QA Scorecard

Total Score: 100 points (weighted)

### Category 1: Accuracy & Completeness (30%)

*Did the agent provide correct, complete information?*

| Score | Description |
|-------|-------------|
| 1 | Incorrect information given; customer would be misled |
| 3 | Mostly correct but missing key details or caveats |
| 5 | Fully accurate, complete answer addressing all parts of the question + anticipated follow-ups |

### Category 2: Empathy & Tone (20%)

*Did the agent acknowledge the customer's situation and respond with appropriate warmth?*

| Score | Description |
|-------|-------------|
| 1 | Robotic, dismissive, or defensive tone |
| 3 | Professional but generic — no personalization to the customer's situation |
| 5 | Genuinely empathetic, acknowledged frustration/situation specifically, warm and human tone |

### Category 3: Resolution & Effectiveness (25%)

*Was the issue actually resolved (or properly escalated)?*

| Score | Description |
|-------|-------------|
| 1 | Issue not resolved, no clear next steps, customer left hanging |
| 3 | Issue partially resolved or resolved but required unnecessary back-and-forth |
| 5 | Issue fully resolved in minimum interactions, or properly escalated with warm handoff |

### Category 4: Communication Clarity (10%)

| Score | Description |
|-------|-------------|
| 1 | Confusing, jargon-heavy, or poorly structured response |
| 3 | Clear but could be more concise or better formatted |
| 5 | Crystal clear, well-formatted (bullet points, steps), appropriate reading level |

### Category 5: Process Adherence (10%)

| Score | Description |
|-------|-------------|
| 1 | Skipped required steps (tagging, internal notes, escalation protocol) |
| 3 | Most processes followed with minor gaps |
| 5 | All processes followed: correct tags, internal notes, proper categorization |

### Category 6: Proactive Support (5%)

| Score | Description |
|-------|-------------|
| 1 | Answered only the literal question asked |
| 3 | Answered the question + pointed to relevant help doc |
| 5 | Answered the question + anticipated next issue + provided preventive guidance |

### ⚠️ Auto-Fail Criteria

These trigger immediate manager review regardless of overall score:

- Shared incorrect billing/pricing information
- Revealed another customer's data
- Made a promise the company can't keep
- Inappropriate or unprofessional language
- Closed ticket without any resolution or next steps
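To make the rubric concrete, the weighted total can be sketched as a small calculation: each category's 1-5 score is scaled to 0-100 and multiplied by its weight, and an auto-fail flag routes the ticket to manager review no matter the total. This is an illustrative sketch: the category keys, the `score / 5 * 100` mapping, and the `qa_score` function name are assumptions, not part of the scorecard itself.

```python
# Weights mirror the example scorecard above (they sum to 1.0).
WEIGHTS = {
    "accuracy": 0.30,    # Accuracy & Completeness
    "empathy": 0.20,     # Empathy & Tone
    "resolution": 0.25,  # Resolution & Effectiveness
    "clarity": 0.10,     # Communication Clarity
    "process": 0.10,     # Process Adherence
    "proactive": 0.05,   # Proactive Support
}


def qa_score(scores, auto_fail=False):
    """Turn per-category 1-5 scores into a weighted 0-100 total.

    Assumes a 1-5 score maps to 0-100 as score / 5 * 100, and uses the
    70/100 passing mark from the review process section.
    """
    if set(scores) != set(WEIGHTS):
        raise ValueError("scores must cover every category exactly once")
    total = sum(WEIGHTS[cat] * (scores[cat] / 5) * 100 for cat in WEIGHTS)
    return {
        "total": round(total, 1),
        "passed": total >= 70 and not auto_fail,
        "needs_manager_review": auto_fail,
    }
```

With all six categories at 5 the total is 100; an auto-fail still forces manager review and blocks a pass even on a perfect score.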

## 3. Review Process

| Parameter | Recommendation |
|-----------|----------------|
| Review cadence | Weekly |
| Tickets per agent per week | 5 (random sample across channels) |
| Reviewer assignment | Rotate reviewers monthly to reduce bias |
| Minimum passing score | 70/100 |
| Coaching trigger | Score below 70 OR 2+ weeks below 80 |
| Recognition trigger | 3+ consecutive weeks above 90 |
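The coaching and recognition triggers in the table above amount to a simple rule over an agent's recent weekly averages. A minimal sketch, assuming scores are ordered oldest to newest; the `review_flags` function name and list format are illustrative:

```python
def review_flags(weekly_scores):
    """Apply the escalation thresholds to an agent's weekly QA averages.

    weekly_scores: list of 0-100 weekly averages, oldest -> newest.
    Coaching: latest week below 70, OR the last two weeks both below 80.
    Recognition: three or more consecutive weeks above 90.
    """
    coaching = bool(weekly_scores) and (
        weekly_scores[-1] < 70
        or (len(weekly_scores) >= 2 and all(s < 80 for s in weekly_scores[-2:]))
    )
    recognition = len(weekly_scores) >= 3 and all(s > 90 for s in weekly_scores[-3:])
    return {"coaching": coaching, "recognition": recognition}
```

For example, `review_flags([85, 78, 76])` flags coaching because the last two weeks are both below 80, even though no single week dropped below 70.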

## 4. Coaching Framework

### 1:1 QA Coaching Meeting Template (30 min, biweekly)

1. **Wins first (5 min):** Start with 1-2 tickets the agent handled exceptionally. Be specific about what was great.

2. **Score review (5 min):** Share average scores by category. Focus on trends, not individual tickets.

3. **Deep dive (10 min):** Pick ONE area for improvement. Review a specific ticket together:
   - "What was your thinking here?"
   - "What would you do differently?"
   - "Let me show you how I'd approach this..."

4. **Action item (5 min):** One specific, measurable goal for next 2 weeks.

5. **Agent's turn (5 min):** "What's blocking you? What tools or training would help?"

**Key principle:** QA is for growth, not punishment. Never use QA scores in isolation for performance decisions.

## Tips for Best Results

- 💡 Start small: reviewing 3 tickets per agent per week is enough to surface trends without overwhelming reviewers, then ramp up to the full 5-ticket sample as the process matures.
- 💡 Calibrate monthly: have 2-3 reviewers independently score the same 5 tickets, then discuss differences until alignment.
- 💡 Weight accuracy and empathy highest: customers forgive slow responses but not wrong answers or cold interactions.
- 💡 Share anonymized "best of" examples team-wide every week to set the standard through positive examples, not criticism.
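The monthly calibration tip can be backed by a quick spread check: if reviewers' totals for the same ticket diverge by more than some tolerance, that ticket goes on the discussion agenda. A minimal sketch; the `calibration_gaps` name, the data shape, and the 10-point default tolerance are assumptions:

```python
def calibration_gaps(ticket_scores, tolerance=10.0):
    """Flag tickets where independent reviewers' totals diverge.

    ticket_scores: maps a ticket ID to the list of 0-100 totals each
    reviewer gave that ticket. A spread (max - min) above `tolerance`
    flags the ticket for the calibration discussion.
    """
    return [
        ticket
        for ticket, scores in ticket_scores.items()
        if len(scores) >= 2 and max(scores) - min(scores) > tolerance
    ]
```

For instance, `calibration_gaps({"T-101": [82, 95, 88], "T-102": [74, 76]})` returns `["T-101"]`: its 13-point spread exceeds the tolerance, while T-102's 2-point spread does not.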