Trust and Accuracy Dashboard
Trust & Accuracy tracks how recommendations perform against measured outcomes and helps tune automation confidence.
Updated 2026-03-18
Prerequisites
- Connected Amazon account
- Selected marketplace
- Enough recommendation outcomes for meaningful statistics
Expected outcome: Trust metrics are interpretable and actionable.
Quick Start
- Open Trust & Accuracy.
- Review summary cards and the measured-outcomes count.
- Check low-accuracy warning if present.
- Review charts for accuracy by type, prediction scatter, confidence distribution, and data readiness.
- Use the activity feed and shadow report to decide whether to adjust Auto-Apply or Smart Tuning settings.
Expected outcome: Accuracy signals are translated into concrete settings changes.
Detailed Workflows
Workflow: Accuracy Remediation Loop
- Identify low-performing recommendation types.
- Increase confidence threshold or tighten exclusions where needed.
- Watch for the low-accuracy warning when measured outcomes are sufficient.
- Monitor subsequent outcomes and re-assess after sufficient new samples.
Expected outcome: Accuracy trends improve with controlled policy changes.
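The loop above can be sketched in code. This is an illustrative outline, not the product's actual API: the outcome format and the function names are assumptions, and the thresholds mirror the values the FAQ documents for the low-accuracy warning (70% accuracy, 10 measured outcomes).

```python
# Sketch of step 1 of the remediation loop: find low-performing
# recommendation types. Outcomes are assumed to arrive as
# (recommendation_type, was_accurate) pairs; names are illustrative.
from collections import defaultdict

MIN_SAMPLES = 10        # don't act on thin evidence
ACCURACY_FLOOR = 0.70   # mirror the dashboard's low-accuracy threshold

def flag_low_performers(outcomes):
    """Return recommendation types whose measured accuracy is below the floor."""
    stats = defaultdict(lambda: [0, 0])  # type -> [accurate, total]
    for rec_type, was_accurate in outcomes:
        stats[rec_type][1] += 1
        if was_accurate:
            stats[rec_type][0] += 1
    return {
        rec_type: accurate / total
        for rec_type, (accurate, total) in stats.items()
        if total >= MIN_SAMPLES and accurate / total < ACCURACY_FLOOR
    }
```

Types the function returns are candidates for a higher confidence threshold or tighter exclusions; types with fewer than `MIN_SAMPLES` outcomes are deliberately ignored rather than flagged.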
Workflow: Validate Data Readiness for Calibration
- Open readiness and report indicators.
- Confirm measured outcomes and report counts.
- Move to Smart Tuning activation when minimum criteria are met.
Expected outcome: Calibration activation is evidence-based.
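The readiness check amounts to a simple evidence gate. A minimal sketch, assuming the counts are read off the readiness indicators; the field names and default minimums here are assumptions, not documented limits:

```python
# Hypothetical readiness gate for Smart Tuning activation.
from dataclasses import dataclass

@dataclass
class Readiness:
    measured_outcomes: int  # count shown on the dashboard
    report_count: int       # available reports backing calibration

def ready_for_smart_tuning(r, min_outcomes=10, min_reports=1):
    """Activate calibration only when minimum evidence criteria are met."""
    return r.measured_outcomes >= min_outcomes and r.report_count >= min_reports
```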
Workflow: Evaluate Shadow Decision Alignment
- Review shadow/simulation report section.
- Compare predicted vs operator decisions.
- Decide whether to move to stronger automation mode.
Expected outcome: Mode progression uses measured alignment data.
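"Compare predicted vs. operator decisions" reduces to an agreement rate over the recommendations both sides acted on. A sketch under assumed data shapes (decisions keyed by recommendation id; none of these names come from the product):

```python
# Illustrative shadow-alignment computation.
def alignment_rate(shadow_decisions, operator_decisions):
    """Fraction of shadow decisions that match what the operator actually did."""
    shared = shadow_decisions.keys() & operator_decisions.keys()
    if not shared:
        return None  # no overlap yet; keep observing before judging alignment
    matches = sum(1 for k in shared if shadow_decisions[k] == operator_decisions[k])
    return matches / len(shared)
```

A high rate over a sufficient sample supports moving to a stronger automation mode; a low rate argues for staying in shadow.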
Workflow: Feed Insights Back into Operations
- Use activity feed to identify repeated failure patterns.
- Map issues to recommendation types.
- Update SOP, Auto-Apply policy, or Smart Tuning expectations.
Expected outcome: Trust dashboard becomes a closed-loop quality control tool.
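Mapping feed issues to recommendation types is a counting exercise. A minimal sketch, assuming each feed event carries a type and a status field (the event shape and the repeat threshold are assumptions):

```python
# Hypothetical mining of the activity feed for repeated failure patterns.
from collections import Counter

def repeated_failures(feed, min_repeats=3):
    """Count failures per recommendation type and surface repeat offenders."""
    counts = Counter(
        event["recommendation_type"]
        for event in feed
        if event.get("status") == "failed"
    )
    return {rec_type: n for rec_type, n in counts.items() if n >= min_repeats}
```

Types surfaced here are the ones worth an SOP, Auto-Apply policy, or Smart Tuning expectation update.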
Workflow: Launch Auto-Apply from Trust & Accuracy
- Open Trust & Accuracy.
- If Auto-Apply onboarding has not been completed and the page has data, click Set Up Auto-Apply.
- Complete the wizard: Comfort Level, Safety Net, Activate.
- Return to the dashboard and monitor summary cards, circuit-breaker state, and shadow/live behavior.
Expected outcome: Operators can move directly from observability into controlled automation setup.
Common Errors
Error: Dashboard appears empty
- Confirm recommendation outcomes exist.
- Continue operations to accumulate measured results.
- If Auto-Apply has not been configured yet, expect the welcome state and use it as the setup entry point.
- Revisit after additional cycles.
Expected outcome: Empty state is understood as data-readiness stage.
Error: Low-accuracy warning persists
- Verify sample size is meaningful.
- Tighten high-risk automation settings.
- Re-check after outcome lag window.
Expected outcome: Persistent warning triggers deliberate tuning, not blind toggling.
Error: Chart data intermittently fails to load
- Refresh page.
- Validate account/marketplace context.
- Retry later if backend source is transiently unavailable.
Expected outcome: Operator distinguishes transient data fetch issues from persistent defects.
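The "refresh, then retry later" guidance is a bounded-retry pattern: a transient fetch failure resolves within a few attempts, a persistent one does not. An illustrative wrapper (the fetch function is a stand-in, not an actual product call):

```python
# Bounded retry with exponential backoff for a flaky data fetch.
import time

def fetch_with_retry(fetch, attempts=3, base_delay=1.0):
    """Retry a transient failure a few times; let a persistent one surface."""
    for attempt in range(attempts):
        try:
            return fetch()
        except ConnectionError:
            if attempt == attempts - 1:
                raise  # still failing: treat as a defect, not a blip
            time.sleep(base_delay * 2 ** attempt)  # 1s, 2s, ...
```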
FAQ
When should I pay attention to the low-accuracy warning?
The warning appears when accuracy drops below 70% and there are at least 10 measured outcomes.
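That rule can be written as a predicate: the warning fires only when there is enough evidence and accuracy is actually low. The thresholds come straight from the answer above; the function name and signature are illustrative.

```python
# The documented low-accuracy warning rule, expressed as a predicate.
def should_warn(accurate_outcomes, measured_outcomes):
    """Warn when accuracy < 70% and there are at least 10 measured outcomes."""
    return (
        measured_outcomes >= 10
        and accurate_outcomes / measured_outcomes < 0.70
    )
```

Note the sample-size guard: 2 accurate out of 5 is 40% accuracy but produces no warning, because five outcomes are not yet meaningful evidence.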
Is this the same thing as Auto-Apply settings?
No. Trust is observability; Auto-Apply is execution policy.
Should I use this page before turning automation up?
Yes. It helps validate recommendation quality before increasing automation scope.
Last Updated
- Last updated: 2026-03-18
- Version assumptions: current
- Scope: TrustDashboardPage (warning logic, chart set, activity feed, shadow report, Smart Tuning summary, and Auto-Apply launch entry point)
