- Choose one primary metric per moment you care about.
- Use the right scale for the question (and keep it consistent).
- Add a few “why” questions so you can act on the score.
Common metrics (NPS, CSAT, CES)
Most teams only need a few metrics. The key is choosing a metric that fits the moment you’re measuring.
Use one primary metric, then add 2–4 supporting questions that explain why the score moves (and what to fix).
CSAT (Satisfaction)
CES (Effort)
NPS (Recommendation)
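For concreteness, the three metrics above are usually computed like this. A minimal sketch, assuming the common conventions (a 1–5 scale for CSAT, a 1–7 ease scale for CES, and the 0–10 NPS scale); your survey tool may use different scale ranges:

```python
def csat(responses):
    """CSAT: percent of respondents answering 4 or 5 on a 1-5 satisfaction scale."""
    satisfied = sum(1 for r in responses if r >= 4)
    return 100 * satisfied / len(responses)

def ces(responses):
    """CES: mean of ease-of-effort ratings (commonly 1-7); higher means easier."""
    return sum(responses) / len(responses)

def nps(responses):
    """NPS: percent promoters (9-10) minus percent detractors (0-6) on a 0-10 scale."""
    promoters = sum(1 for r in responses if r >= 9)
    detractors = sum(1 for r in responses if r <= 6)
    return 100 * (promoters - detractors) / len(responses)
```

For example, `nps([10, 9, 8, 6, 10])` gives 40.0: three promoters and one detractor out of five responses. Note that NPS is a net percentage (it can be negative), while CSAT is a simple "top-two-box" share.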
Evidence and outcomes
How to connect metrics to business outcomes without overclaiming.
No single score proves causation. But in large-scale research, experience signals (especially satisfaction) are often associated with future business performance when measured consistently over time.
The most reliable pattern is practical: metrics help when they’re tied to a specific decision. If you can’t explain what you’d change based on the result, the measurement isn’t decision-ready yet.
Treat NPS as a broad, relationship-level trend and use CSAT/CES for specific moments. Then add 2–4 drivers so you can act, and segment so you know who needs what.
- ACSI-based research on customer satisfaction and future cash flow / shareholder value (e.g., Fornell et al.; related work).
- Academic comparisons of NPS vs. satisfaction and their predictive performance (e.g., Keiningham et al.).
- Customer effort research and practice guidance popularized by service research and HBR (CES as friction indicator).
- Practical reviews on when NPS helps and when it doesn’t (e.g., MIT Sloan Management Review and similar).
Mini glossary
A few terms you'll see in measurement work.
Likert scales (and why they matter)
A scale is the ordered set of answer options people choose from (for example, 1–5 with labeled endpoints). It sounds like a small detail, but it determines whether results are interpretable and comparable.
Match the scale to the question: satisfaction for experiences, ease for effort, confidence for readiness, agreement for statements. When the scale is wrong, the score is hard to trust—and hard to use.
Practical guardrails
Your goal is repeatability: if the scale or wording changes between survey waves, results stop being comparable.
Treat metrics as signals, not truths. Combine them with open feedback and simple segmentation to decide what to do next.
- Use one direction everywhere (low → high) and keep labels consistent.
- Anchor questions to a specific touchpoint and time window (e.g., “this support call”).
- Ask fewer questions, but ask them well (clarity beats quantity).
- Always include at least one open question: “What’s the main reason for your score?”
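The segmentation mentioned above can be as simple as grouping scores and open-ended reasons by segment, so you see not just who is unhappy but why. A minimal sketch with hypothetical field names (`segment`, `score`, `reason` are illustrative, not a required schema):

```python
from collections import defaultdict
from statistics import mean

# Hypothetical survey records: a score plus the open "main reason" answer
responses = [
    {"segment": "new users", "score": 3, "reason": "setup was confusing"},
    {"segment": "new users", "score": 2, "reason": "couldn't find docs"},
    {"segment": "power users", "score": 5, "reason": "fast support reply"},
    {"segment": "power users", "score": 4, "reason": "minor UI lag"},
]

# Group responses by segment
by_segment = defaultdict(list)
for r in responses:
    by_segment[r["segment"]].append(r)

# Average score and collected reasons per segment
for segment, rows in by_segment.items():
    avg = mean(row["score"] for row in rows)
    reasons = [row["reason"] for row in rows]
    print(f"{segment}: avg {avg:.1f} | reasons: {reasons}")
```

Even this crude grouping makes the result decision-ready: the open reasons tell you what to fix, and the segment tells you who it affects first.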