Every Percio report leads with a usability score between 0 and 100. It’s designed to be directional, not absolute — a fast signal to tell you whether a flow is in good shape or needs work.

How it’s calculated

The score is derived from the severity of the issues a test found. Higher-severity issues (critical, high) pull the score down more than lower-severity ones (medium, low). Positive notes don’t increase the score; they provide context, not credit.

The score is also specific to the persona who ran the test. A flow can score 90 for a power user and 55 for a first-timer. To see that contrast, compare the two reports side by side rather than combining them into one number.
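To make the severity-weighting idea concrete, here is a minimal sketch in Python. The penalty values are invented for illustration; Percio does not publish its actual formula or weights.

```python
# Hypothetical severity penalties -- NOT Percio's real weights.
PENALTY = {"critical": 25, "high": 15, "medium": 7, "low": 3}

def usability_score(issue_severities):
    """Sketch of a severity-weighted score: start at 100,
    subtract a penalty per issue, and clamp to the 0-100 range.

    `issue_severities` is a list like ["high", "medium", "low"].
    Positive notes are deliberately absent: they add context, not credit.
    """
    score = 100 - sum(PENALTY.get(sev, 0) for sev in issue_severities)
    return max(0, min(100, score))

print(usability_score(["high", "medium", "low"]))  # 75
print(usability_score([]))                         # 100
```

The clamp at 0 reflects the key property described above: many critical issues can't push the score below the floor, and positive notes never push it above what the issues allow.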

How to read the number

  • 85–100 — strong flow. Most issues are minor and tactical.
  • 65–84 — solid flow with real issues to address. Ship the fixes in the next cycle.
  • 40–64 — significant usability problems. Worth prioritizing above most new work.
  • Below 40 — the flow is broken for at least one persona. Treat as a blocker.

These bands are guidelines, not guarantees. A score of 70 on an early-stage prototype is different from a 70 on a mature production flow — context matters.
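If you want to apply these bands programmatically, say to flag low-scoring runs in a dashboard, the mapping is a straightforward threshold check. This is a sketch of the guideline bands above, not an official Percio API.

```python
def score_band(score):
    """Map a 0-100 usability score to the guideline bands.

    Thresholds mirror the documented bands: 85+, 65-84, 40-64, below 40.
    """
    if score >= 85:
        return "strong flow"
    if score >= 65:
        return "solid flow with real issues"
    if score >= 40:
        return "significant usability problems"
    return "broken for at least one persona"

print(score_band(70))  # solid flow with real issues
```

Remember the caveat above when acting on the band: the same 70 means different things on a prototype versus a mature production flow.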

What the score doesn’t capture

  • Aesthetic quality. Percio evaluates usability, not brand or visual polish.
  • Performance. If the flow was slow but eventually worked, that may not show up in the score.
  • Success rate. A persona that completed the scenario will still score lower if it hit real issues along the way.
  • Business metrics. Usability affects conversion, retention, and support load — but Percio can’t measure those directly.

Comparing scores across runs

Two runs of the same scenario and the same persona on the same product are roughly comparable. Runs with different personas, different scenarios, or against different products are not — don’t compare a landing page score to a checkout flow score.

What’s next