Before you go deeper, here’s the vocabulary the rest of the docs uses.

Test

A single usability evaluation of a URL. A test has a scenario, a persona, and produces a result. Every test run consumes credits — see Credits.

Scenario

The task you want a persona to attempt. Good scenarios describe what the user is trying to accomplish, not step-by-step instructions. Examples:
  • “Find a pair of running shoes under $120 and add them to the cart”
  • “Sign up for an account and post your first comment”
  • “Cancel an existing subscription”
Bad scenarios are too vague (“explore the site”) or too prescriptive (“click button X, then Y, then Z”). See Writing good scenarios.

Persona

The lens Percio uses to evaluate your product. A persona has an occupation, a device context, a tech expertise level, focus areas, pain points, behavioral traits, priority Nielsen heuristics, and a full system prompt that governs how it interprets what it sees. Personas in Percio are either created during onboarding, created by you on the /personas page, or generated via the Agent’s MCP tools. There are no generic built-in personas — every persona is specific to your product. See Personas overview.
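The attribute list above can be pictured as a record. This is an illustrative sketch only — the field names and the example values are assumptions, not Percio's actual schema:

```typescript
interface Persona {
  occupation: string;
  deviceContext: string;          // hypothetical, e.g. "mobile, mid-range Android"
  techExpertise: "novice" | "intermediate" | "expert";
  focusAreas: string[];
  painPoints: string[];
  behavioralTraits: string[];
  priorityHeuristics: string[];   // Nielsen heuristics this persona weights
  systemPrompt: string;           // governs how it interprets what it sees
}

// A made-up persona for illustration; real personas are generated
// per-product during onboarding, on /personas, or via MCP tools.
const examplePersona: Persona = {
  occupation: "freelance designer",
  deviceContext: "mobile, mid-range Android",
  techExpertise: "intermediate",
  focusAreas: ["visual hierarchy", "checkout flow"],
  painPoints: ["cluttered navigation"],
  behavioralTraits: ["skims rather than reads"],
  priorityHeuristics: ["aesthetic and minimalist design"],
  systemPrompt: "(generated per-product by Percio)",
};
```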

Flow

The sequence of steps the browser actually took during a test — click here, type there, scroll down, navigate away. The flow is what the persona evaluates after the run is complete.

Step

One action inside a flow. Each step has: a screenshot captured before the action, the agent’s reasoning, and the action it took (click, type, select, scroll, wait, navigate, complete, or abandon). By default the agent caps tests at 50 steps (adjustable with --max-steps). A test can also end early if the agent hits 3 consecutive failures, stalls below 50% progress, or decides the task is done.
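The step actions and early-exit rules described above can be sketched as follows. This is a minimal illustration under assumed names (Percio's internal types are not published here):

```typescript
type StepAction =
  | "click" | "type" | "select" | "scroll"
  | "wait" | "navigate" | "complete" | "abandon";

interface Step {
  screenshot: string; // captured before the action runs
  reasoning: string;  // why the agent chose this action
  action: StepAction;
}

const DEFAULT_MAX_STEPS = 50; // default cap; overridable with --max-steps

// Hypothetical helper combining the termination conditions in the docs.
function shouldStop(
  steps: Step[],
  consecutiveFailures: number,
  stuckBelowHalfProgress: boolean,
  maxSteps: number = DEFAULT_MAX_STEPS,
): boolean {
  const last = steps[steps.length - 1]?.action;
  return (
    last === "complete" ||       // agent decided the task is done
    last === "abandon" ||        // agent gave up on the task
    steps.length >= maxSteps ||  // hit the step cap
    consecutiveFailures >= 3 ||  // three failures in a row
    stuckBelowHalfProgress       // no progress past 50%
  );
}
```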

Issue

A specific usability problem a persona found. Each issue has:
  • Severity (critical / high / medium / low)
  • Nielsen heuristic it violates
  • What’s happening — the observed problem
  • Why it matters — the user impact
  • Who it affects most — when the issue hits some users harder than others
  • Recommendation — a concrete fix
After a test, you can mark each issue Confirmed, False positive, or Dismissed.
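The issue fields above map naturally onto a record. A sketch under assumed names (not Percio's actual API), plus a small sort that a report view might use to surface the most severe issues first:

```typescript
type Severity = "critical" | "high" | "medium" | "low";
type Triage = "confirmed" | "false-positive" | "dismissed";

interface Issue {
  severity: Severity;
  heuristic: string;         // the Nielsen heuristic it violates
  whatsHappening: string;    // the observed problem
  whyItMatters: string;      // the user impact
  whoItAffectsMost?: string; // present when impact is uneven
  recommendation: string;    // a concrete fix
  triage?: Triage;           // set by you after the test
}

const SEVERITY_ORDER: Record<Severity, number> = {
  critical: 0, high: 1, medium: 2, low: 3,
};

// Most severe issues first.
function bySeverity(issues: Issue[]): Issue[] {
  return [...issues].sort(
    (a, b) => SEVERITY_ORDER[a.severity] - SEVERITY_ORDER[b.severity],
  );
}
```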

Heuristic

A usability principle from Jakob Nielsen’s 10 heuristics (visibility of system status, match between system and the real world, user control and freedom, consistency and standards, error prevention, recognition rather than recall, flexibility and efficiency of use, aesthetic and minimalist design, help users recognize and recover from errors, and help and documentation). Every issue ties to one so you know why it’s a problem, not just that it’s a problem.

Credit

The unit that meters test runs. Running a test costs credits — creating a persona in the web app does not. Your plan (Indie or Pro) determines how many credits you get per billing cycle. If you run out, new test runs are blocked until the next cycle or an upgrade. See Credits.
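The gating rule is simple: a run is allowed only while enough credits remain. A minimal sketch, assuming a per-run cost of 1 credit (the actual cost per run is plan-specific — see Credits):

```typescript
// Hypothetical helper; "canRunTest" and the default cost are assumptions.
function canRunTest(remainingCredits: number, costPerRun: number = 1): boolean {
  return remainingCredits >= costPerRun;
}
```

When credits hit zero, this is the check that blocks new runs until the next billing cycle or an upgrade.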

Execution

A single test run identified by an executionId. The live test page lives at /test-running/[executionId] and the final report lives at /results/[executionId]. You can share a results URL with teammates on your account.
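The two per-execution pages named above are simple functions of the executionId:

```typescript
// URL paths from the docs; the helper names themselves are illustrative.
const liveTestUrl = (executionId: string): string =>
  `/test-running/${executionId}`;

const resultsUrl = (executionId: string): string =>
  `/results/${executionId}`;
```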

Report retention

Results are kept for a retention window set by your plan: 90 days on Indie, 365 days on Pro. After that they’re deleted. If long-term archiving matters to you, export what you need before it ages out.
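The retention windows above determine when a report ages out. A sketch (the function name and plan keys are assumptions; the day counts come from the docs):

```typescript
const RETENTION_DAYS = { indie: 90, pro: 365 } as const;

// Hypothetical helper: the date a report created at `createdAt` is deleted.
function reportExpiresAt(
  createdAt: Date,
  plan: keyof typeof RETENTION_DAYS,
): Date {
  const expires = new Date(createdAt.getTime());
  expires.setUTCDate(expires.getUTCDate() + RETENTION_DAYS[plan]);
  return expires;
}
```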

What’s next