Unphish v2 Docs

Staff UAT

How Unphish staff and clients run user acceptance testing.

User Acceptance Testing in v2 has two halves:

  1. Staff UAT — Unphish staff exercising the product against the 1,540-case workbook corpus to confirm that v2 matches the lived experience of v1.
  2. Client UAT — pilot client(s) exercising their portal against their real workflow, signing off on go-live.

This page covers both.

Staff UAT corpus

The staff UAT corpus was harvested from internal workbooks: 1,540 cases across 21 sections covering everything staff actually do day-to-day in v1. The corpus is the empirical record of how the product is used — not just what the spec says it should do.

Each UAT case has been reviewed and either:

  • Mapped to an existing parity row — v2 already covers it.
  • Added as a child row — the case revealed a v1 behavior the parity matrix had abstracted away (e.g., PAR-004.05, PAR-009.06).
  • Folded into an OpenSpec recommendation — the case revealed a UI bug, UX gap, or spec ambiguity that was promoted to an explicit acceptance criterion.

The mapping document is docs/qa/staff-uat-functional-spec-gap-review-2026-05-03.md. It is the audit trail showing every staff UAT case has been considered.

What staff UAT looks like in practice

Staff UAT is not "click around and see if it feels right." It is structured:

  1. Pick a parity epic (e.g., PAR-008 enforcement).
  2. Identify the staff workbook section(s) that map to that epic.
  3. Run the seeded qa_v1_parity fixture set in staging or a designated UAT environment.
  4. Sign in with the appropriate persona — analyst, admin, partner, client reviewer — using a real session cookie, not an auth bypass.
  5. Walk every case in the workbook section. Confirm the v2 flow produces the same outcome as the documented v1 expectation.
  6. Record results in the matrix. Pass / fail / skip with reason. Failed cases get a bug filed; skipped cases get a justification.
  7. Run dark-mode and responsive QA for the surfaces that require it.

What gets covered

| Section | Personas | Surfaces |
| --- | --- | --- |
| Sign-in and invite | All | /signin, /invite/[token] |
| Hub team | Staff/admin | /hub/team |
| Dashboard team | Org admin | /dashboard/team |
| Threat feed and triage | Analyst | /dashboard/threat-feed, case detail |
| Case detail and evidence | Analyst, partner | Case detail, evidence drawer |
| Client review queue | Client reviewer | /client/review |
| Enforcement creation (CleanDNS) | Analyst | Enforcement modal |
| Enforcement creation (social) | Analyst | Enforcement modal |
| Enforcement creation (registrar) | Analyst | Enforcement modal |
| Verification queue | Analyst | /dashboard/verification |
| Watchlist | Analyst, partner, client | /dashboard/watchlist, /client/watchlist |
| Whitelist | Analyst, partner | /dashboard/whitelist |
| Scan centre | Admin, partner | /admin/scans |
| Reports | All | Reports module |
| Intelligence | Analyst, partner | /dashboard/intelligence |
| Public API contract | API integrator | API endpoints |
| User lifecycle protections | Admin | /admin/users |
| Client delete policy | Admin | /admin/clients/[id] |
| Provider secret management | Hub admin | /hub/secrets |
| Impersonation / support preview | Hub admin | /admin/users/[id] → preview |
| Audit log search | Admin | /admin/audit |

Recommendations folded into the matrix

Staff UAT produced 26 recommendations covering UI bugs, UX gaps, and spec ambiguities. Each one has been promoted to an explicit row or acceptance criterion, including:

  • Public API contract details (endpoints, scope behavior, error codes).
  • Enforcement modal field-level behaviors.
  • Whitelist entry types (Client Domain, Client Social, Partner Domain, Partner Social).
  • Scan centre module configuration.
  • Dashboard widget completeness (quota gauges, action-required queue, recent cases, top platforms/tags, issue split, closed type, provider responsiveness, SLA history, report runs).
  • User lifecycle protections (cannot delete the only owner, cannot remove yourself from the org you administer).
  • Client delete policy (soft delete with audit; hard delete requires explicit two-person approval).
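The user lifecycle protections above are invariant checks, not UI affordances. A minimal sketch of the two guards, assuming a hypothetical membership shape (`Member` and `canRemoveMember` are illustrative names only):

```typescript
// Hypothetical membership record; the real schema may differ.
interface Member {
  userId: string;
  role: "owner" | "admin" | "member";
}

// Two invariants from the recommendations:
//  1. You cannot remove yourself from the org you administer.
//  2. You cannot delete the only owner.
function canRemoveMember(
  actorId: string,
  target: Member,
  members: Member[],
): { ok: boolean; reason?: string } {
  if (target.userId === actorId) {
    return { ok: false, reason: "cannot remove yourself" };
  }
  const owners = members.filter((m) => m.role === "owner");
  if (target.role === "owner" && owners.length === 1) {
    return { ok: false, reason: "cannot delete the only owner" };
  }
  return { ok: true };
}
```

Whatever the real implementation looks like, the guard has to run server-side; hiding the delete button is not a protection.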

The "FAQ Admin" workbook section is still pending product disposition: retire, merge, or add as a parity row. Pilot does not depend on it.

Client UAT for pilot

When a pilot client is preparing for go-live:

  1. Onboard them in staging with their real brands and policies.
  2. Walk them through the client portal — review queue, case submission, brand assets, reports.
  3. Run a guided pilot scenario: a real-looking threat moves through analyst review → client approval → enforcement (to a safe sink) → verification → closure.
  4. Have them sign off on each capability they will rely on. Their signoff is recorded in the pilot acceptance document.
  5. Identify any gaps that need filling before production cutover. Gaps go into the OpenSpec ledger and the parity matrix.
  6. Confirm reporting cadence and content. Pilot clients will receive at least one full reporting cycle in staging before production.
  7. Promote. Production cutover happens when staff UAT is green for the affected epics, client UAT is signed off, and release gates are clean.

Common UAT failure patterns

| Pattern | Root cause | Fix |
| --- | --- | --- |
| "It works but the data is wrong" | Source state not labelled | Add the source label and assertion |
| "It works as analyst but breaks for client reviewer" | Capability gate missing | Wire the capability check at API and UI |
| "It works on Chrome but not Safari" | CSS or auth cookie posture | Test on Safari before signoff |
| "It works at 1440px but breaks at 1024px" | Responsive QA skipped | Add breakpoint to QA checklist |
| "It works in light mode but breaks in dark" | Dark mode skipped | Required for dashboard / threat feed / case detail / client review / partner / watchlist / whitelist / scan centre |
| "It worked yesterday" | Fixture changed or migration backed out | Re-seed fixtures; check /admin/imports for the affected migration |
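"Wire the capability check at API and UI" usually means one shared predicate consumed by both layers, so they cannot drift apart. A minimal sketch under assumed names (`hasCapability`, the capability strings, and the role-to-grant map are all illustrative, not the product's actual roles or capabilities):

```typescript
// Hypothetical capability model: one source of truth for grants.
type Capability = "case.approve" | "enforcement.create" | "watchlist.edit";

const grants: Record<string, Capability[]> = {
  analyst: ["enforcement.create", "watchlist.edit"],
  "client-reviewer": ["case.approve"],
};

// Shared predicate: the UI uses it to hide controls,
// the API uses it to reject requests.
function hasCapability(role: string, cap: Capability): boolean {
  return (grants[role] ?? []).includes(cap);
}

// API side: deny before doing any work.
function assertCapability(role: string, cap: Capability): void {
  if (!hasCapability(role, cap)) {
    throw new Error(`403: role "${role}" lacks ${cap}`);
  }
}
```

The failure pattern in the table arises exactly when the UI and API each grow their own ad-hoc role checks; a shared predicate makes "works as analyst, breaks as client reviewer" reproducible in a unit test instead of discovered during UAT.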
