CSAT - Customer Satisfaction Score: What It Is, How It Works, and How to Use It
CSAT (Customer Satisfaction Score) is one of the most widely used metrics in customer experience — a fast, simple way to measure how satisfied customers are with a specific interaction, product, or service. Unlike NPS, which measures overall loyalty, CSAT is designed to capture the customer's sentiment at a particular moment or touchpoint.
CSAT is typically gathered through a single survey question — "How satisfied were you with [product/service/interaction]?" — scored on a scale of 1–5 or 1–10. The resulting score gives organisations a consistent, trackable measure of how well specific parts of the customer journey are performing.
This guide covers what CSAT is, how to calculate it, when to use it, its strengths and limitations, and practical guidance on getting more from your CSAT program — including how it fits alongside NPS and Customer Effort Score as part of a complete CX measurement framework.
What CSAT measures
Customer satisfaction with a specific interaction, product, or touchpoint — captured immediately after the experience while it is fresh.
Why it matters
It gives CX and contact centre teams a direct, actionable signal on how well specific touchpoints are performing — and where to prioritise improvement.
What this guide covers
The definition, history, how to calculate CSAT, when to measure it, pros and cons, cultural considerations, and tips for effective measurement.
What is CSAT?
CSAT, short for Customer Satisfaction Score, is a metric used to measure how satisfied customers are with a specific product, service, or interaction. It is gathered through a brief post-experience survey — typically a single question asking customers to rate their satisfaction on a numerical scale or using visual indicators such as emoji faces or star ratings.
The result is expressed as a percentage: the proportion of respondents who gave a positive rating (typically the top two options on the scale) out of all respondents. This makes it easy to track, compare, and benchmark across time, channels, teams, and touchpoints.
CSAT is one of three core CX metrics — alongside Net Promoter Score (NPS) and Customer Effort Score (CES). Each measures a different dimension of the customer experience, and the most mature CX programs use all three in combination.
In plain English
CSAT asks: "How satisfied were you with that?" It captures the customer's immediate reaction to a specific experience — making it the most targeted of the three core CX metrics.
✓ What CSAT is
- A transactional, touchpoint-level metric
- Fast to collect — typically a single question
- Expressed as a percentage for easy comparison
- Actionable at the team and channel level
- Most useful when measured immediately after an interaction
✕ What CSAT is not
- A measure of overall brand loyalty (that is NPS)
- A measure of how easy the experience was (that is CES)
- A complete picture of customer sentiment on its own
- Universally consistent across cultures
- Sufficient as a sole CX measurement strategy
A Brief History of Customer Satisfaction Measurement
The concept of measuring customer satisfaction has roots that stretch back to the earliest forms of commerce — a merchant who sold poor-quality grain quickly learned that dissatisfied customers don't return. But the formalisation of CSAT as a structured metric began in the 1970s and has evolved significantly since.
1970s — Structured surveys emerge
Businesses began using formal surveys to gather customer feedback systematically, introducing Likert scales as a consistent way to measure satisfaction responses.
1980s — Research links CSAT to loyalty
Researchers began building statistical models connecting satisfaction to repurchase behaviour — laying the foundation for metrics like NPS which would follow in the 2000s.
1990s–2000s — Digital feedback arrives
Online surveys transformed CSAT collection — broader reach, real-time responses, and integration into websites, email, and app touchpoints made continuous feedback collection practical.
2010s–now — Journey-level measurement
The shift from isolated touchpoint measurement to journey-wide CX programs. CSAT is now one component of a broader measurement framework including NPS, CES, and predictive analytics.
How to Calculate CSAT
CSAT calculation is straightforward. The score is expressed as a percentage of positive responses — typically defined as the top two scores on the rating scale — out of all responses received.
CSAT Formula
CSAT (%) = (Number of Positive Responses ÷ Total Responses) × 100
Define your survey question
Choose a clear, specific question — for example: "How satisfied were you with your interaction today?" or "How would you rate your overall experience with [product/service]?" Use a consistent scale — typically 1–5 or 1–10 — across all touchpoints you want to compare.
Collect responses
Distribute the survey after the relevant interaction — post-call, post-purchase, post-onboarding, or at other key journey touchpoints. Common channels include automated post-call IVR surveys, email, SMS, web pop-ups, in-app prompts, or in-person kiosks.
Identify your positive responses
On a 1–5 scale, responses of 4 or 5 are typically counted as positive. On a 1–10 scale, 8, 9, and 10 are commonly used. Define your threshold consistently and stick to it — changing the definition mid-stream makes trend data meaningless.
Calculate the score
Divide the number of positive responses by the total number of responses, then multiply by 100. For example: 150 positive responses out of 200 total = 75% CSAT. As a general guide: below 50% indicates significant issues; 50–74% is moderate; 75%+ is good.
Track trends and act on the data
A single score is a snapshot. Track CSAT over time, across channels, by agent team, by contact type, and by customer segment to identify patterns. Pair scores with open-text qualitative responses to understand the "why" and prioritise where to improve.
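The calculation steps above can be sketched in a few lines of Python. The response data and the 4-or-above positive threshold are illustrative assumptions, not values from any specific survey platform:

```python
def csat_score(responses, positive_threshold=4):
    """CSAT as the percentage of positive responses.

    responses: list of ratings on a 1-5 scale.
    positive_threshold: minimum rating counted as positive (4 and 5 by default).
    """
    if not responses:
        raise ValueError("No responses collected")
    positive = sum(1 for r in responses if r >= positive_threshold)
    return round(positive / len(responses) * 100, 1)

# The worked example from the guide: 150 positive out of 200 total responses
responses = [5] * 90 + [4] * 60 + [3] * 30 + [2] * 12 + [1] * 8
print(csat_score(responses))  # → 75.0
```

For a 1–10 scale, the same function works with `positive_threshold=8`, matching the convention of counting 8, 9, and 10 as positive.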
What a CSAT Survey Looks Like
CSAT surveys can use numerical scales, star ratings, or emoji/icon-based visual scales. The emoji format is increasingly popular because it is fast to complete, visually intuitive, and works well across digital and mobile channels. A simple five-point emoji CSAT question presents a row of faces ranging from very dissatisfied to very satisfied:

😠 · 🙁 · 😐 · 🙂 · 😄

In this format, the two rightmost (positive) faces — the smiling and broadly smiling emojis — are the responses counted as positive in the CSAT calculation. The neutral and negative faces are not counted as positive, but they still count toward the total number of responses, so they pull the score down rather than being ignored — much as Passives in NPS count toward total respondents without adding to or subtracting from the score.
Multi-Dimension CSAT Surveys
CSAT can also be used to measure satisfaction across multiple aspects of a single experience simultaneously — asking customers to rate different elements such as customer service, product selection, or store atmosphere in one survey. This gives organisations a granular breakdown of which specific dimensions are performing well and which need attention, rather than a single overall satisfaction score.
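A multi-dimension survey can be scored by applying the same CSAT formula to each dimension independently. A minimal sketch, with hypothetical dimensions and response data:

```python
# Illustrative multi-dimension survey responses: each respondent rates
# several aspects of one experience on a 1-5 scale.
survey_responses = [
    {"customer service": 5, "product selection": 3, "store atmosphere": 4},
    {"customer service": 4, "product selection": 2, "store atmosphere": 5},
    {"customer service": 5, "product selection": 4, "store atmosphere": 3},
    {"customer service": 2, "product selection": 3, "store atmosphere": 4},
]

def dimension_csat(responses, dimension, positive_threshold=4):
    """CSAT percentage for one dimension of a multi-question survey."""
    ratings = [r[dimension] for r in responses if dimension in r]
    positive = sum(1 for x in ratings if x >= positive_threshold)
    return round(positive / len(ratings) * 100, 1)

for dim in ["customer service", "product selection", "store atmosphere"]:
    print(f"{dim}: {dimension_csat(survey_responses, dim)}%")
```

With this sample data, "product selection" scores well below the other two dimensions, which is exactly the kind of granular signal a single overall score would hide.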
When to Measure CSAT
CSAT is most powerful when used at specific touchpoints in the customer journey — not as a blanket, always-on measure of overall brand satisfaction. That broader function is better served by NPS. CSAT excels at giving you a fast, targeted read on how specific interactions are performing.
Using Customer Journey Mapping to identify the "moments that matter" in your customer lifecycle helps you decide where CSAT measurement will be most valuable and actionable.
High-value touchpoints
Post-contact centre interaction, post-purchase, after onboarding, after a complaint resolution, or at the end of a support ticket — anywhere a specific experience has just concluded and the customer's reaction is fresh.
Collection channels
Automated post-call IVR surveys, email, SMS, web pop-ups, in-app prompts, and in-person kiosks or touchscreen devices. The channel affects response rates and can introduce bias — keep it consistent where you want to compare data.
Timing is critical
Send CSAT surveys immediately after the interaction while the experience is fresh. Surveys sent hours or days later attract lower response rates and less accurate recall — particularly for contact centre interactions.
💡 CSAT vs NPS — the right tool for the right job
Use CSAT to measure specific interactions and touchpoints. Use NPS to measure overall brand loyalty and the broader relationship. They answer different questions — the best CX programs use both. For more, see the CSAT vs NPS vs CES comparison section below.
Real-World CSAT in Action — Changi Airport
One of the most immediately recognisable examples of in-person CSAT measurement is Changi Airport's Instant Feedback System (IFS) — deployed in restrooms throughout the terminal to give cleaning crews real-time feedback on facility cleanliness. Customers tap one of five buttons (or faces) as they leave, and the data is immediately visible to the cleaning team and supervisors.
It is a perfect illustration of CSAT's core strength: a fast, low-friction question asked at exactly the right moment, tied directly to an operational team that can act on the data immediately. The feedback loop is tight, the question is specific, and the result is actionable.
The lesson for contact centres
The same principle applies to post-call CSAT surveys. A single, simple question asked immediately after the interaction — before the customer moves on — captures the most honest and accurate sentiment. Complexity and delay are CSAT's enemies.
Pros and Cons of CSAT
CSAT is a valuable and widely used metric — but like any single measurement, it has real limitations that CX professionals need to understand before building a measurement program around it.
✓ Strengths of CSAT
- Simple to implement — a single question that any team can deploy quickly
- Actionable feedback — open-text follow-up questions reveal specific, fixable issues
- Touchpoint specific — measures exactly the interaction you want to evaluate
- Benchmarkable — consistent calculation makes it comparable over time and across teams
- Fast signal — immediate post-interaction measurement captures fresh sentiment
✕ Limitations of CSAT
- Limited context — the score alone doesn't explain why customers feel the way they do
- No loyalty link — "satisfied" doesn't necessarily mean loyal or retained
- Survey fatigue — over-surveying reduces response rates and data quality
- Cultural bias — scores vary significantly across cultures and geographies
- Response bias — those at the extremes are more likely to respond
Tips for Effective CSAT Measurement
Collecting CSAT data is straightforward. Collecting CSAT data that is reliable, actionable, and genuinely representative is harder. Here are the practices that separate high-quality CSAT programs from those that generate numbers without generating insight.
Ensure adequate sample size
A CSAT score based on 12 responses is not reliable. Ensure your sample size is representative of the volume and mix of interactions you are measuring before drawing conclusions or making changes.
Ask immediately after the interaction
Timing is critical. Send the CSAT survey as close to the interaction as possible — within minutes for contact centre interactions. The longer you wait, the less accurate the recall and the lower the response rate.
Keep the scale consistent
Choose a scale (1–5 or 1–10) and stick to it. Changing scales or redefining what counts as "positive" mid-program makes trend data worthless and comparisons misleading.
Segment your results
An overall CSAT score hides as much as it reveals. Break results down by contact type, channel, agent team, product, customer segment, and time of day — the patterns in the segments are where the most actionable insights live.
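Segmented scoring is the same calculation grouped by an attribute of each response. A minimal sketch, assuming each record carries a segment label (channel, in this hypothetical example) alongside its rating:

```python
from collections import defaultdict

# Illustrative response records: (segment label, rating on a 1-5 scale).
# The segment could equally be contact type, agent team, or customer tier.
records = [
    ("phone", 5), ("phone", 4), ("phone", 2),
    ("chat", 3), ("chat", 5), ("chat", 4), ("chat", 4),
    ("email", 2), ("email", 3), ("email", 4),
]

def csat_by_segment(records, positive_threshold=4):
    """CSAT percentage per segment, from (segment, rating) pairs."""
    buckets = defaultdict(list)
    for segment, rating in records:
        buckets[segment].append(rating)
    return {
        seg: round(sum(r >= positive_threshold for r in ratings) / len(ratings) * 100, 1)
        for seg, ratings in buckets.items()
    }

print(csat_by_segment(records))
```

Here the blended score looks moderate, but the breakdown shows email lagging well behind chat — the kind of pattern that tells you where to focus.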
Always include a follow-up question
The score tells you how satisfied customers were. "What is the main reason for your rating?" tells you why — and why is what you need to actually improve. Never collect CSAT without a qualitative follow-up.
Act on the data — close the loop
A CSAT program that produces scores but no action is a vanity exercise. Establish a process for reviewing low scores, following up with dissatisfied customers where appropriate, and systematically using the qualitative feedback to drive improvement.
CSAT vs NPS vs CES — Understanding the Three Core CX Metrics
CSAT, NPS, and CES are the three most widely used CX metrics. They are complementary, not competing — each measures a different dimension of the customer experience. Understanding when to use each is fundamental to building a mature CX measurement program.
CSAT — Satisfaction
Question: "How satisfied were you with [interaction/product]?"
Measures: Satisfaction with a specific touchpoint or experience
Best for: Transactional feedback, contact centre QA, product and service improvement
Expressed as: Percentage of positive responses
NPS — Loyalty
Question: "How likely are you to recommend us to a friend or family member?"
Measures: Overall brand loyalty and advocacy
Best for: Strategic reporting, executive dashboards, overall brand health
Expressed as: Score from -100 to +100
CES — Effort
Question: "How easy was it to resolve your issue today?"
Measures: The ease and friction of the customer experience
Best for: Identifying friction points, self-service design, reducing effort-driven churn
Expressed as: Average score on a 1–7 or 1–5 scale
Which one should you use?
All three — used at the right moments. A mature CX program uses CSAT for transactional touchpoint feedback, NPS for periodic overall brand loyalty measurement, and CES for understanding friction in the contact and resolution experience. Together they give a far more complete picture than any single metric alone.
Frequently Asked Questions About CSAT
What does CSAT stand for?
CSAT stands for Customer Satisfaction Score. It is a metric used to measure how satisfied customers are with a specific interaction, product, or service, typically gathered through a short post-experience survey.
How is CSAT calculated?
CSAT is calculated as: (Number of Positive Responses ÷ Total Responses) × 100. On a 1–5 scale, positive responses are typically those rated 4 or 5. On a 1–10 scale, responses of 8, 9, or 10 are commonly counted as positive. The result is expressed as a percentage — for example, 150 positive responses out of 200 total = 75% CSAT.
What is a good CSAT score?
As a general guide: below 50% indicates significant issues requiring urgent attention; 50–74% is moderate; 75% and above is considered good. However, benchmarks vary significantly by industry — financial services, utilities, and government contact centres typically score lower than retail and hospitality. The most meaningful benchmark is your own trend over time and industry-specific data rather than a universal threshold.
What is the difference between CSAT and NPS?
CSAT measures satisfaction with a specific interaction or touchpoint — "how was that experience?" NPS measures overall brand loyalty and likelihood to recommend — "how do you feel about us overall?" They measure different things and serve different purposes. CSAT is transactional and touchpoint-specific; NPS is relational and brand-level. Both are valuable and are best used together as part of a broader CX measurement framework.
How often should you send CSAT surveys?
CSAT surveys should be triggered by interactions — sent after each relevant touchpoint (post-call, post-purchase, post-support ticket) rather than on a fixed schedule. Survey fatigue is a real risk — avoid sending multiple surveys to the same customer in a short period. Many organisations implement a "survey throttling" rule, limiting how frequently any individual customer receives a survey regardless of how many interactions they have.
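A throttling rule like the one described can be as simple as a per-customer cooldown check. A minimal sketch — the 30-day cooldown is an assumed policy value, not a standard:

```python
from datetime import datetime, timedelta

class SurveyThrottle:
    """Limit how often any individual customer receives a survey.

    Illustrative sketch: the 30-day default cooldown is an assumed
    policy, not an industry-standard value.
    """
    def __init__(self, cooldown_days=30):
        self.cooldown = timedelta(days=cooldown_days)
        self.last_surveyed = {}  # customer_id -> datetime of last survey sent

    def should_survey(self, customer_id, now=None):
        """Return True (and record the send) only if the cooldown has elapsed."""
        now = now or datetime.now()
        last = self.last_surveyed.get(customer_id)
        if last is not None and now - last < self.cooldown:
            return False
        self.last_surveyed[customer_id] = now
        return True

throttle = SurveyThrottle(cooldown_days=30)
day0 = datetime(2024, 1, 1)
print(throttle.should_survey("cust-42", now=day0))                       # True
print(throttle.should_survey("cust-42", now=day0 + timedelta(days=5)))   # False
print(throttle.should_survey("cust-42", now=day0 + timedelta(days=40)))  # True
```

In production this check would typically run against a customer database rather than in-memory state, but the rule itself stays this simple: trigger on interactions, gate on recency.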
Does cultural background affect CSAT scores?
Yes — significantly. Research shows that customers in individualistic cultures (Australia, USA, Germany, Ireland) tend to choose more extreme ratings than those in collectivistic cultures (Japan, China, Korea, Mexico). This means a CSAT score from an Australian customer base cannot be directly compared to one from a Japanese customer base without adjustment. Always contextualise CSAT benchmarks within the geographic and cultural context in which they were collected.
Should you use a 5-point or 10-point CSAT scale?
Both are widely used — the most important thing is consistency. A 5-point scale is simpler for customers to complete and typically achieves higher response rates. A 10-point scale offers more granularity but can introduce scoring inconsistency (different customers interpret the midpoints differently). Either works well; choose one and keep it consistent across all touchpoints where you want to compare data.
Can CSAT be used to measure agent performance?
Yes — post-interaction CSAT is one of the most commonly used inputs for contact centre agent quality assessment. However, like Average Handle Time (AHT), it should never be the sole metric. CSAT for an individual agent is influenced by contact type, queue wait time, product issues outside the agent's control, and customer mood at the start of the interaction. Use agent-level CSAT as a guide for coaching conversations — always pair it with quality review data and broader context.
Summary: CSAT - Customer Satisfaction Score
CSAT is a foundational CX metric that gives organisations a fast, consistent read on how customers felt about a specific interaction or experience. Its simplicity is its strength — one question, immediately after the experience, captures the honest, unfiltered reaction that is most useful for operational improvement.
Used well, CSAT is one of the most actionable metrics in CX — particularly for contact centres, where post-interaction feedback can be tied directly to agent teams, contact types, and specific process failures. Used poorly — as a vanity number collected but not acted on, or interpreted without cultural and contextual awareness — it adds noise without adding value.
The highest-return approach is to pair CSAT with a qualitative follow-up question, segment the results by team and contact type, close the loop with dissatisfied customers where possible, and track trends over time rather than obsessing over a single data point. Alongside NPS and CES, CSAT gives CX leaders a genuinely complete picture of the customer experience and a commercially grounded case for improving it.