Below the usability score, Percio generates a narrative summary of the run. It’s structured into four parts and is designed to be the section you show your team in a review.

Executive summary

A 2–4 sentence overview of the run: what the persona was trying to do, how it went, and the headline takeaway. Read this first — it frames everything else in the report.

Biggest risk

The single issue Percio considers the most dangerous for your product. It’s often (but not always) the most severe issue — Percio weighs severity alongside how central the issue is to the flow. If the biggest risk is something you can fix quickly, do that before running the next test. If it’s a deeper structural problem, use it as ammunition to prioritize the work.

Drop-off patterns

Where the persona got stuck, hesitated, or almost abandoned. This is the section most connected to your conversion funnel — drop-off patterns in the usability test map directly to drop-off patterns you’d see in real analytics. Common drop-off patterns Percio surfaces:
  • Decision paralysis on choice-heavy pages.
  • Confusion at state transitions (e.g. “I don’t know what this page is for”).
  • Implicit expectations the product doesn’t meet (e.g. expected a confirmation email, didn’t receive one).
  • Trust anxiety at high-stakes moments (payment, account creation, irreversible actions).

Quick wins

Small, high-leverage changes Percio thinks would improve the experience significantly. Quick wins are characterized by being:
  • Cheap to implement — copy changes, button labels, reordering, removing friction.
  • Directly observed — tied to something the persona actually struggled with, not hypothetical.
  • High-impact — fixing them would meaningfully change the persona’s experience.
Treat quick wins as a shortlist of things to try between now and the next test.
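If it helps to think about the four parts as one structured object, here is an illustrative sketch. The field names (`executive_summary`, `biggest_risk`, and so on) are assumptions for illustration, not Percio's actual export format.

```python
from dataclasses import dataclass, field

@dataclass
class NarrativeSummary:
    """Hypothetical shape of the four-part narrative summary."""
    executive_summary: str  # 2-4 sentence overview of the run
    biggest_risk: str       # the single most dangerous issue
    drop_off_patterns: list[str] = field(default_factory=list)  # where the persona stalled
    quick_wins: list[str] = field(default_factory=list)         # cheap, high-leverage fixes

# Example values, invented for illustration:
summary = NarrativeSummary(
    executive_summary="The persona completed checkout but hesitated at payment.",
    biggest_risk="No visible trust signals on the payment step.",
    drop_off_patterns=["Decision paralysis on the plan-selection page"],
    quick_wins=["Rename the 'Continue' button to 'Go to payment'"],
)
```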

How the summary is generated

Every part of the narrative is produced by Claude after it has seen the full flow and issues — it’s not pulled from a template. That means the language reflects your specific product and scenario, but also that wording can vary from run to run. Comparisons across runs should therefore be based on the issues list and score, not on summary text alone.
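That comparison can be mechanical if you diff the structured outputs rather than the prose. A minimal sketch, assuming each run is represented as a dict with a numeric `"score"` and a set of issue identifiers under `"issues"` (hypothetical names, not Percio's API):

```python
def compare_runs(prev: dict, curr: dict) -> dict:
    """Diff two runs on score and issues, ignoring summary wording.

    `prev` and `curr` are hypothetical dicts: {"score": int,
    "issues": set of issue identifiers}.
    """
    return {
        "score_delta": curr["score"] - prev["score"],
        "fixed_issues": sorted(prev["issues"] - curr["issues"]),  # gone since last run
        "new_issues": sorted(curr["issues"] - prev["issues"]),    # appeared this run
    }

# Invented example data:
diff = compare_runs(
    {"score": 62, "issues": {"unclear-cta", "no-confirmation-email"}},
    {"score": 71, "issues": {"unclear-cta", "payment-trust-anxiety"}},
)
# score improved by 9; one issue fixed, one new issue surfaced
```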
