Scenario quality drives report quality more than any other input. Here’s how to write ones that produce useful findings.

What a good scenario looks like

A good scenario describes what the user is trying to accomplish, leaves the how to the persona, and gives Percio enough context to know when the task is complete.
Find a pair of running shoes under $120, add them to the cart,
and proceed to checkout.
This scenario:
  • States a clear goal.
  • Implies the criteria for success (reaching a specific page, seeing a confirmation).
  • Leaves the path open — the persona can explore and get stuck in ways that surface real usability issues.

What bad scenarios look like

Explore the site.
  • Too vague. "Explore the site" gives the persona nothing to evaluate. The report will be generic.
  • Too prescriptive. A click-by-click script turns the test into a walkthrough instead of a usability evaluation. You'll learn nothing about where real users would get stuck.
  • Ambiguous success. Without a clear end state, Percio can't decide when it's done: it may wander, abandon early, or declare a flawed run complete.
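For contrast, here is what the other two failure modes might look like for the same shoe-shopping task (illustrative phrasings, not taken from a real test). A too-prescriptive version:

Click the search icon, type "running shoes", click the first result,
click "Add to cart", then click the cart icon in the header.

And a version with ambiguous success:

Try out the checkout flow and see how it feels.

The first scripts every click, so the run can only confirm the script; the second never says what "done" means.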

The three rules

  1. State the goal, not the steps. “Buy a blue shirt” is better than “click this, then that.”
  2. Give enough context to know when it’s done. Does success mean landing on a thank-you page? Seeing a confirmation? Getting an email?
  3. Match the scenario to the persona. A “power user” scenario shouldn’t be the same as a “first-time visitor” scenario — see Picking a persona.
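Applied together, the rules turn a step list into a goal with a defined end state. A sketch of the same fix, building on the "blue shirt" example from rule 1:

Instead of: Click the blue shirt on the homepage, add it to the cart,
then click checkout.

Write: Buy a blue shirt. You're done when you see the order
confirmation page.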

Useful context to include

Anything Percio can’t infer from the URL belongs in the scenario:
  • Sign-in state. “Assume you’re already signed in” or “you do not have an account yet.”
  • Inventory or state constraints. “The cart is empty.” “You have one unread notification.”
  • Feature flags or hidden entry points. “The new onboarding is behind the ?v=2 query param.”
  • Time or region constraints. “You’re shopping from the US.” “It’s the middle of business hours.”
  • Prior actions. “You just received a payment failure email.”
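Context like this reads best at the top of the scenario, before the goal. Extending the running-shoes example from earlier (the sign-in and cart lines are illustrative assumptions):

You're already signed in and your cart is empty.
Find a pair of running shoes under $120, add them to the cart,
and proceed to checkout. You're done when you reach the order
review page.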

Scenario anti-patterns

  • Multiple unrelated goals in one scenario. Run separate tests — you’ll get cleaner findings.
  • Tasks that require credentials Percio doesn’t have. If sign-in is required, either pre-authenticate on a staging environment where the flow is open, or scope the scenario to the signed-out experience. The agent CLI supports more flexibility here.
  • Scenarios that depend on external services. Third-party OAuth, payment providers in test mode with strict requirements, and email verification can all block the agent if they’re in the critical path. Call them out explicitly or scope around them.
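One way to scope around an external dependency such as email verification, for example (illustrative wording):

Sign up for a new account with any email address. Stop when you
see the "check your email" screen; treat reaching that screen
as success.

This keeps the verification step out of the critical path while still testing the sign-up flow itself.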

Iterating on a scenario

If your first report reads as generic or off-target, the scenario is almost always the culprit. Treat scenario writing as an iterative skill — each run teaches you how to be more specific the next time.

What’s next