# Staff UAT
How Unphish staff and clients run user acceptance testing.
User Acceptance Testing in v2 has two halves:
- Staff UAT — Unphish staff exercising the product against the 1,540-case workbook corpus to confirm v2 matches the lived experience of v1.
- Client UAT — pilot client(s) exercising their portal against their real workflow, signing off on go-live.
This page covers both.
## Staff UAT corpus
The staff UAT corpus was harvested from internal workbooks: 1,540 cases across 21 sections covering everything staff actually do day-to-day in v1. The corpus is the empirical record of how the product is used — not just what the spec says it should do.
Each UAT case has been reviewed and either:
- Mapped to an existing parity row — v2 already covers it.
- Added as a child row — the case revealed a v1 behavior the parity matrix had abstracted away (e.g., PAR-004.05, PAR-009.06).
- Folded into an OpenSpec recommendation — the case revealed a UI bug, UX gap, or spec ambiguity that was promoted to an explicit acceptance criterion.
The mapping document is docs/qa/staff-uat-functional-spec-gap-review-2026-05-03.md. It is the audit trail showing every staff UAT case has been considered.
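The three dispositions above can be modelled as a small tagged union. This is an illustrative sketch only: `UatCase`, `Disposition`, and the field names are assumptions, not the real schema of the mapping document.

```typescript
// Hypothetical model of a reviewed staff UAT case. Each case carries
// exactly one disposition: mapped to a parity row, added as a child row,
// or folded into an OpenSpec recommendation.
type Disposition =
  | { kind: "mapped"; parityRow: string }        // v2 already covers it
  | { kind: "child-row"; parityRow: string }     // e.g. PAR-004.05
  | { kind: "openspec"; recommendation: string } // promoted acceptance criterion

interface UatCase {
  id: string;              // e.g. "UAT-0001" (illustrative format)
  workbookSection: string;
  disposition: Disposition;
}

// Summarise a corpus by disposition kind; a corpus is the audit trail
// only if every case has been assigned one of the three kinds.
function countByKind(corpus: UatCase[]): Record<string, number> {
  const counts: Record<string, number> = {};
  for (const c of corpus) {
    counts[c.disposition.kind] = (counts[c.disposition.kind] ?? 0) + 1;
  }
  return counts;
}
```

A discriminated union keeps the three outcomes mutually exclusive, so a case cannot be both "mapped" and "pending" at the same time.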
## What staff UAT looks like in practice
Staff UAT is not "click around and see if it feels right." It is structured:
- Pick a parity epic (e.g., PAR-008 enforcement).
- Identify the staff workbook section(s) that map to that epic.
- Run the seeded `qa_v1_parity` fixture set in staging or a designated UAT environment.
- Sign in with the appropriate persona — analyst, admin, partner, client reviewer — using a real cookie, not a bypass.
- Walk every case in the workbook section. Confirm the v2 flow produces the same outcome as the documented v1 expectation.
- Record results in the matrix. Pass / fail / skip with reason. Failed cases get a bug filed; skipped cases get a justification.
- Run dark-mode and responsive QA for the surfaces that require it.
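The record-results step can be made mechanical: a pass needs nothing extra, a fail requires a filed bug, and a skip requires a justification. A minimal sketch, with hypothetical `UatResult` and `validateResult` names (not the real matrix schema):

```typescript
// Illustrative result record for one walked UAT case.
type UatResult =
  | { status: "pass" }
  | { status: "fail"; bugId: string } // failed cases get a bug filed
  | { status: "skip"; reason: string }; // skipped cases get a justification

// Return a list of recording errors; empty means the result is well-formed.
function validateResult(r: UatResult): string[] {
  const errors: string[] = [];
  if (r.status === "fail" && !r.bugId.trim()) {
    errors.push("failed case needs a bug filed");
  }
  if (r.status === "skip" && !r.reason.trim()) {
    errors.push("skipped case needs a justification");
  }
  return errors;
}
```

Encoding the rule in the result type means a reviewer cannot record a bare "fail" or "skip" without the required follow-up.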
## What gets covered
| Section | Personas | Surfaces |
|---|---|---|
| Sign-in and invite | All | /signin, /invite/[token] |
| Hub team | Staff/admin | /hub/team |
| Dashboard team | Org admin | /dashboard/team |
| Threat feed and triage | Analyst | /dashboard/threat-feed, case detail |
| Case detail and evidence | Analyst, partner | Case detail, evidence drawer |
| Client review queue | Client reviewer | /client/review |
| Enforcement creation (CleanDNS) | Analyst | Enforcement modal |
| Enforcement creation (social) | Analyst | Enforcement modal |
| Enforcement creation (registrar) | Analyst | Enforcement modal |
| Verification queue | Analyst | /dashboard/verification |
| Watchlist | Analyst, partner, client | /dashboard/watchlist, /client/watchlist |
| Whitelist | Analyst, partner | /dashboard/whitelist |
| Scan centre | Admin, partner | /admin/scans |
| Reports | All | Reports module |
| Intelligence | Analyst, partner | /dashboard/intelligence |
| Public API contract | API integrator | API endpoints |
| User lifecycle protections | Admin | /admin/users |
| Client delete policy | Admin | /admin/clients/[id] |
| Provider secret management | Hub admin | /hub/secrets |
| Impersonation / support preview | Hub admin | /admin/users/[id] → preview |
| Audit log search | Admin | /admin/audit |
## Recommendations folded into the matrix
Staff UAT produced 26 recommendations covering UI bugs, UX gaps, and spec ambiguities. Each one has been promoted to an explicit row or acceptance criterion, including:
- Public API contract details (endpoints, scope behavior, error codes).
- Enforcement modal field-level behaviors.
- Whitelist entry types (Client Domain, Client Social, Partner Domain, Partner Social).
- Scan centre module configuration.
- Dashboard widget completeness (quota gauges, action-required queue, recent cases, top platforms/tags, issue split, closed type, provider responsiveness, SLA history, report runs).
- User lifecycle protections (cannot delete the only owner, cannot remove yourself from the org you administer).
- Client delete policy (soft delete with audit; hard delete requires explicit two-person approval).
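The two user-lifecycle protections above reduce to simple guards. A sketch under an assumed `OrgMember` model (the interface and role names are illustrative, not the real data model):

```typescript
// Hypothetical org membership record.
interface OrgMember {
  userId: string;
  role: "owner" | "admin" | "member";
}

// Protection 1: cannot delete the only owner of an org.
function canDeleteUser(members: OrgMember[], targetId: string): boolean {
  const target = members.find((m) => m.userId === targetId);
  if (!target) return false;
  if (target.role === "owner") {
    const ownerCount = members.filter((m) => m.role === "owner").length;
    if (ownerCount <= 1) return false; // would leave the org ownerless
  }
  return true;
}

// Protection 2: cannot remove yourself from the org you administer.
function canRemoveMember(
  actorId: string,
  actorRole: OrgMember["role"],
  targetId: string
): boolean {
  const isSelf = actorId === targetId;
  const isAdministering = actorRole === "admin" || actorRole === "owner";
  return !(isSelf && isAdministering);
}
```

Enforcing these at both the API and the UI (rather than the UI alone) is what the staff UAT cases are checking for.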
The "FAQ Admin" workbook section is still pending product disposition: retire, merge, or add as a parity row. Pilot does not depend on it.
## Client UAT for pilot
When a pilot client is preparing for go-live:
- Onboard them in staging with their real brands and policies.
- Walk them through the client portal — review queue, case submission, brand assets, reports.
- Run a guided pilot scenario: a real-looking threat moves through analyst review → client approval → enforcement (to a safe sink) → verification → closure.
- Have them sign off on each capability they will rely on. Their signoff is recorded in the pilot acceptance document.
- Identify any gaps that need filling before production cutover. Gaps go into the OpenSpec ledger and the parity matrix.
- Confirm reporting cadence and content. Pilot clients will receive at least one full reporting cycle in staging before production.
- Promote. Production cutover happens when staff UAT is green for the affected epics, client UAT is signed off, and release gates are clean.
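The guided pilot scenario's happy path is a linear progression, which can be sketched as an ordered stage list. The stage names here are illustrative, not the product's real status values:

```typescript
// Hypothetical stages of the guided pilot scenario, in order:
// analyst review -> client approval -> enforcement (safe sink) ->
// verification -> closure.
const PILOT_STAGES = [
  "analyst-review",
  "client-approval",
  "enforcement",
  "verification",
  "closed",
] as const;

type Stage = (typeof PILOT_STAGES)[number];

// Return the next stage in the pilot walk, or null once the case is closed.
function nextStage(current: Stage): Stage | null {
  const i = PILOT_STAGES.indexOf(current);
  return i >= 0 && i < PILOT_STAGES.length - 1 ? PILOT_STAGES[i + 1] : null;
}
```

Walking the scenario is then just advancing a case through `nextStage` until it returns null, signing off each capability along the way.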
## Common UAT failure patterns
| Pattern | Root cause | Fix |
|---|---|---|
| "It works but the data is wrong" | Source state not labelled | Add the source label and assertion |
| "It works as analyst but breaks for client reviewer" | Capability gate missing | Wire the capability check at API and UI |
| "It works on Chrome but not Safari" | CSS or auth cookie posture | Test on Safari before signoff |
| "It works at 1440px but breaks at 1024px" | Responsive QA skipped | Add breakpoint to QA checklist |
| "It works in light mode but breaks in dark" | Dark mode skipped | Required for dashboard / threat feed / case detail / client review / partner / watchlist / whitelist / scan centre |
| "It worked yesterday" | Fixture changed or migration backed out | Re-seed fixtures; check /admin/imports for the affected migration |
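The responsive and dark-mode failure patterns above can be prevented by encoding the required checks so a signoff cannot silently skip one. A sketch using the breakpoints and surfaces named in this page (the check-ID format is an assumption):

```typescript
// Breakpoints from the failure-pattern table; 1024px is where layouts break.
const REQUIRED_BREAKPOINTS = [1440, 1024] as const;

// Surfaces where dark-mode QA is required, per the table above.
const DARK_MODE_SURFACES = [
  "dashboard",
  "threat-feed",
  "case-detail",
  "client-review",
  "partner",
  "watchlist",
  "whitelist",
  "scan-centre",
];

// Given the set of checks a reviewer has completed, return the ones
// still outstanding. Signoff should be blocked until this is empty.
function missingChecks(done: Set<string>): string[] {
  const required = [
    ...REQUIRED_BREAKPOINTS.map((w) => `responsive@${w}`),
    ...DARK_MODE_SURFACES.map((s) => `dark:${s}`),
  ];
  return required.filter((check) => !done.has(check));
}
```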