Customer Effort Score - CES: What It Is, How It Works, and How to Improve It
Customer Effort Score (CES) is a customer experience metric that measures how easy or difficult it was for a customer to interact with your business to achieve their goal. Developed by the Corporate Executive Board (now part of Gartner) and popularised by a landmark 2010 Harvard Business Review article — "Stop Trying to Delight Your Customers" — CES challenged the prevailing wisdom that organisations should focus on exceeding customer expectations.
The research behind CES revealed something surprising: delighting customers does not actually build loyalty. What does build loyalty is making things easy. Reducing the effort a customer has to expend to resolve an issue, make a purchase, or get information is the most reliable driver of customer loyalty — and CES is the metric designed to measure it.
This guide covers everything you need to know about CES — what it is, the different survey formats, how to calculate it, when to use it, what a good score looks like, and how to improve it — including how it fits alongside NPS and CSAT as part of a complete CX measurement framework.
What CES measures
How easy it was for a customer to achieve their goal — interact with support, make a purchase, resolve an issue — expressed as a score on a consistent scale.
Why it matters
Low customer effort is the strongest predictor of customer loyalty. CES is 1.8x more predictive of loyalty than CSAT and 2x more predictive than NPS.
What this guide covers
The CES definition, why it matters, survey formats, question examples, three calculation methods, when to measure, what a good score looks like, and how to improve it.
What is Customer Effort Score?
A Customer Effort Score (CES) is a customer experience survey that measures how easy it was for a customer to interact with your business and achieve their desired outcome. The "effort" being measured is the level of work the customer had to put in — how many steps, how many contacts, how much friction — to get what they needed.
CES was developed by the Corporate Executive Board (now Gartner) and gained widespread attention following a Harvard Business Review article in 2010 that challenged the conventional wisdom around customer delight. The research showed that customers who had to expend high effort were significantly more likely to become disloyal — even if they were ultimately satisfied with the outcome.
CES is one of the three core CX metrics alongside Net Promoter Score (NPS) and CSAT (Customer Satisfaction Score). Each measures a different dimension of the customer experience, and the most mature CX programs use all three in combination.
In plain English
CES asks: "How easy was that?" It measures the friction in the customer experience — and friction is the enemy of loyalty.
Why Customer Effort Score Matters
The research underpinning CES produced some of the most compelling statistics in customer experience. The core finding — that reducing effort is more powerful for building loyalty than delighting customers — has fundamentally shaped how leading organisations think about service design.
Reduced repeat contacts
Organisations that reduce customer effort see a 40% reduction in repeat calls — customers don't have to call back because their issue was resolved completely the first time.
Fewer escalations
Low-effort experiences produce 50% fewer escalations — customers who find interactions easy are less likely to become frustrated and demand manager intervention.
Lower cost to serve
Reduced channel switching (down 54% in low-effort interactions) and repeat contacts translate directly into lower operational costs — a 37% cost reduction in the original research.
The CES Question — How to Frame It
When CES was first introduced in 2010, it used a 5-point scale measuring the effort customers had to expend. Since then, multiple variations have emerged — different scales, different question formats, and different ways to phrase the question. What matters most is consistency: whichever format you choose, stick to it across all the touchpoints you want to compare.
There are two primary ways to frame the CES question:
Statement format
Best suited to Likert and 1–5 scales. Asks customers to agree or disagree with a positive statement about the ease of their experience.
"The company made it easy for me to handle my issue."
Scale: Strongly Disagree → Strongly Agree
Direct question format
Better suited to 1–10 and emoji scales. Asks customers directly how easy the experience was.
"How easy was it to resolve your issue today?"
Scale: Very Difficult → Very Easy
💡 Best practice for CES questions
- Use a neutral tone: the question must be objective.
- Avoid using the word "effort" in the question itself.
- Be specific: ask about one interaction or touchpoint, not the brand overall.
- Segment your audience: new customers may need different questions from long-term ones.
- Always optimise for mobile: more customers respond on their phones than ever before.
CES Question and Statement Examples
There is no limit to how you can adapt the CES question to your specific context. Here are some of the most widely used examples across different touchpoints.
Question examples
- How easy has it been to use [Product] so far?
- Overall, how easy was it to solve your problem today?
- How easy was it to find the information you wanted on our website?
- How easy was it to interact with our team?
- Were the instructions during onboarding easy to follow?
Statement examples
- "[Product] made it easy to handle my issue."
- "[Product] made it easy for me to use the [X] feature."
- "Our support rep made it easy for me to resolve my issue."
- "The company made it easy to make a purchase."
- "The company made it easy for me to get the information I needed."
Types of Customer Effort Score Surveys
There are four main survey formats used to measure customer effort. All are valid — what matters is choosing one and applying it consistently. As long as you can reliably determine the level of effort the customer expended, the specific format is secondary.
1. Likert Scale (7-Point)
The Likert scale uses a 7-point answer scale, typically colour-coded from red (strongly disagree) to green (strongly agree). It is the most granular of the CES formats and is well-suited to the statement format question.
2. 1 to 10 Scale
On a 1–10 CES scale, a low score is considered good as it indicates low effort. This is the inverse of NPS scoring — lower is better. Best used with the direct question format.
3. 1 to 5 Scale
Similar to the Likert scale but with fewer options, the 1–5 scale is simpler to complete and often achieves higher response rates. Can be used as either a question or statement.
4. Emoji / Emoticon Scale
Rising in popularity, the emoji scale is typically 3–5 points and is particularly effective for mobile and in-person kiosk surveys. Visually intuitive and fast to complete — though limited in granularity beyond 5 points.
How to Calculate Customer Effort Score
Because CES uses multiple survey formats, there are three commonly used calculation methods. The right method depends on which survey type you are using. Choose one and apply it consistently — mixing methods makes your data impossible to trend.
Method 1 — Average Score
The simplest approach. Add up all the scores received and divide by the number of responses. Best suited to the Likert and 1–10 scale formats.
💡 Example
100 responses with a total score of 650 = CES of 6.5. On a 1–10 scale where lower is better, this would indicate moderate-to-high effort.
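The average-score arithmetic can be sketched in a few lines of Python. The response values below are illustrative, not real survey data:

```python
def ces_average(scores: list[float]) -> float:
    """Method 1: mean of all responses (Likert or 1-10 scale)."""
    return sum(scores) / len(scores)

# Four illustrative responses on a 1-10 scale (lower = less effort)
print(ces_average([5, 7, 6, 8]))  # 6.5
```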
Method 2 — Percentage of Positive Responses
Count only the positive responses (those you define as easy/low effort) and express them as a percentage of all responses. Best suited to the 1–5 scale and emoji formats.
💡 Example
80 positive responses out of 100 = CES of 80%. You must clearly define what counts as a positive response — for Likert, this might be "Agree" and "Strongly Agree" only, or all three positive options. Define it once and stick to it.
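A minimal sketch of the percentage-positive calculation. It assumes a 1–5 scale with 4 and 5 defined as the positive responses; the function and variable names are illustrative:

```python
def ces_percent_positive(scores: list[int], positive: set[int]) -> float:
    """Method 2: positive responses as a share of all responses."""
    hits = sum(1 for s in scores if s in positive)
    return 100 * hits / len(scores)

# 1-5 scale; 4 and 5 defined as positive (define once, keep consistent)
print(ces_percent_positive([5, 4, 3, 5, 2], positive={4, 5}))  # 60.0
```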
Method 3 — Positive Minus Negative
Similar in concept to the NPS calculation: subtract the percentage of negative responses from the percentage of positive responses. Decide up front how to treat the neutral midpoint, either excluding it from both numerators (as NPS does with passives) or grouping it with the negatives, and apply that rule consistently.
💡 Example
500 respondents on a 1–5 scale, grouping the neutral midpoint with the negatives: scores of 4 and 5 (positive) = 300 responses = 60%. Scores of 1, 2 and 3 (neutral/negative) = 200 responses = 40%. CES = 60% − 40% = 20%.
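The worked example can be checked in code. This sketch keeps the `positive` and `negative` sets as parameters so you can apply whichever midpoint convention you define; the 180/120/80/70/50 split is an illustrative breakdown consistent with the 300-positive, 200-negative totals above:

```python
def ces_pos_minus_neg(scores, positive, negative):
    """Method 3: % positive minus % negative, out of all responses."""
    total = len(scores)
    pos = sum(s in positive for s in scores)
    neg = sum(s in negative for s in scores)
    return 100 * (pos - neg) / total

# Illustrative split: 500 respondents on a 1-5 scale,
# midpoint (3) grouped with the negatives
scores = [5] * 180 + [4] * 120 + [3] * 80 + [2] * 70 + [1] * 50
print(ces_pos_minus_neg(scores, positive={4, 5}, negative={1, 2, 3}))  # 20.0
```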
Adding the Open-Ended Follow-Up Question
The CES score tells you how much effort the customer had to expend. It does not tell you why. Without understanding the reason behind the score, you cannot identify what to fix. Always pair your CES question with an open-ended follow-up — even a single optional text field adds enormous diagnostic value.
This single additional question transforms CES from a tracking metric into an actionable diagnostic tool. The qualitative responses surface the specific friction points customers are experiencing — slow systems, confusing processes, unclear information, unnecessary steps — giving you the insight you need to prioritise improvements.
Close the loop
If customers have taken the time to give you detailed feedback, take the time to acknowledge it. Where possible, close the loop with the customer — let them know how their feedback has led to a change or improvement. This builds trust and significantly increases the likelihood they will respond to future surveys.
When to Conduct a Customer Effort Score Survey
CES is most powerful when measured immediately after a specific interaction — while the experience is fresh and the customer can accurately recall how much effort they had to put in. The key is immediacy and specificity: ask about this interaction, right after it happened.
After a service interaction
Post-call, post-chat, or post-email survey immediately after the customer contact ends. One of the most common CES use cases in contact centres.
After a purchase
Survey immediately following a transaction — measuring the ease of the buying process, checkout flow, or onboarding experience.
After onboarding
Once a customer completes onboarding for a product or service — measuring how easy the setup or activation process was to navigate.
After a cancellation
Capturing effort scores at cancellation provides valuable data about friction in the offboarding or exit process — and can inform retention strategies.
CES vs NPS for relationship measurement
CES is a transactional metric — it measures specific interactions. For measuring overall brand relationship and loyalty, NPS is more appropriate. Some organisations have begun using CES as a broader relationship metric, but this is outside its original design intent and arguably better served by NPS.
What is a Good Customer Effort Score?
Benchmarking CES is genuinely difficult — and this is one of CES's known limitations. Because there are multiple question formats, scales, and calculation methods, comparing your CES score directly to another organisation's is rarely valid unless you are using the exact same methodology.
What matters far more than hitting an arbitrary benchmark is establishing your own baseline and trending it over time. The questions to ask are:
Is your score trending up?
If CES is improving over time, your friction reduction efforts are working. This is the most meaningful signal — direction matters more than absolute value.
Is your score trending down?
A declining CES score is a warning signal that friction is increasing somewhere in the customer journey — something has changed and customers are finding it harder.
Which touchpoints score low?
Segment CES by contact type, channel, product, and team. Low scores at specific touchpoints reveal exactly where to focus improvement effort.
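Segmentation is straightforward once each response is stored with its touchpoint. A hypothetical sketch, assuming responses arrive as (touchpoint, score) pairs on a 1–5 agreement scale where higher means easier; the touchpoint names are invented for illustration:

```python
from collections import defaultdict

def ces_by_touchpoint(responses):
    """Average CES per touchpoint from (touchpoint, score) pairs."""
    buckets = defaultdict(list)
    for touchpoint, score in responses:
        buckets[touchpoint].append(score)
    return {tp: sum(v) / len(v) for tp, v in buckets.items()}

# Hypothetical responses on a 1-5 agreement scale (higher = easier)
responses = [
    ("phone", 4), ("phone", 5), ("chat", 2),
    ("chat", 3), ("checkout", 5), ("checkout", 4),
]
print(ces_by_touchpoint(responses))  # {'phone': 4.5, 'chat': 2.5, 'checkout': 4.5}
```

A low-scoring bucket (here, chat) tells you where to focus improvement effort first.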
How does CES correlate with loyalty?
Connect CES data to retention, repeat contact rates, and NPS. High-effort touchpoints that correlate with churn or repeat contacts are your highest-priority targets.
How does it compare to CSAT?
Pairing CES with CSAT at the same touchpoints can reveal gaps — satisfied customers who nevertheless found the experience effortful are at risk of not returning.
What are customers telling you?
The open-text responses are your most valuable data. A moderate CES score with rich qualitative feedback pointing to one specific pain point is more actionable than a perfect score with no context.
How to Improve Your Customer Effort Score
The core principle is simple: make things easier for your customers. In practice, this means systematically identifying and removing the friction points in the customer journey, the unnecessary steps, obstacles, and inefficiencies that force customers to work harder than they should have to.
Empower frontline staff to resolve issues completely
Contact centre agents and customer service staff who are empowered to resolve issues at first contact — without escalation, without transfers, without asking customers to call back — are the single biggest lever for reducing effort. Focus on outcome, not productivity metrics.
Improve self-service channels
Customers who can find answers themselves without needing to contact your organisation are experiencing zero effort. Investing in genuinely useful self-service — FAQs, knowledge bases, chatbots that actually resolve rather than deflect — reduces overall contact volume and customer effort simultaneously.
Minimise channel switching
Customers who start an interaction in one channel and are forced to switch to another (call the contact centre because the chat couldn't resolve it; email because the call didn't fix it) experience compounding effort. Reduce the need to switch by improving resolution rates in each channel.
Reduce response times
Waiting — in a queue, for an email reply, for a chat agent — is effort. Every minute a customer waits adds to their perceived effort score. Reducing average speed of answer and email/chat response times directly improves CES.
Prevent the next issue — not just the current one
One of the most powerful effort reduction strategies is proactive service — anticipating what the customer will need next and addressing it before they have to contact you again. A contact centre agent who resolves the current issue and flags a likely next issue is delivering genuine low-effort service.
Act on the qualitative feedback systematically
Review open-text responses regularly for recurring themes — the same friction point mentioned by 30 customers represents a systemic issue, not a one-off. Prioritise fixes by frequency and customer impact, and track whether CES improves after each change is implemented.
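Once open-text responses have been coded with friction themes (manually or via a text-analytics tool), ranking them by frequency is trivial. The theme labels below are hypothetical:

```python
from collections import Counter

# Hypothetical theme codes, one per coded open-text response
tagged_feedback = [
    "long_wait", "channel_switch", "long_wait", "unclear_info",
    "long_wait", "channel_switch", "long_wait",
]
theme_counts = Counter(tagged_feedback)
for theme, count in theme_counts.most_common():
    print(f"{theme}: {count}")
```

The most frequent theme is the systemic issue to prioritise; re-run the count after each fix to confirm the theme recedes.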
CES vs NPS vs CSAT — The Three Core CX Metrics
CES, NPS, and CSAT each measure a different dimension of the customer experience. They are complementary tools, not competing alternatives. The most mature CX programs use all three — at the right moments, for the right purposes.
CES — Effort
Question: "How easy was it to resolve your issue?"
Measures: Friction in the customer experience
Best for: Post-interaction, self-service, contact centre, onboarding
Strongest predictor of: Customer loyalty and repeat contact
CSAT — Satisfaction
Question: "How satisfied were you with [interaction]?"
Measures: Satisfaction with a specific touchpoint
Best for: Transactional feedback, QA, product improvement
Strongest predictor of: Immediate satisfaction and repeat purchase
NPS — Loyalty
Question: "How likely are you to recommend us?"
Measures: Overall brand loyalty and advocacy
Best for: Strategic reporting, executive dashboards, brand health
Strongest predictor of: Long-term retention and revenue growth
Which one should you use?
All three — used at the right moments. CES for transactional effort measurement after interactions. CSAT for immediate satisfaction at touchpoints. NPS for periodic overall loyalty measurement. Used together, they give you a complete and commercially grounded picture of the customer experience that no single metric can provide alone.
Frequently Asked Questions About Customer Effort Score
What does CES stand for?
CES stands for Customer Effort Score. It is a customer experience metric that measures how easy or difficult it was for a customer to interact with a business and achieve their desired outcome — resolve an issue, make a purchase, complete an onboarding, or get information.
How is CES different from CSAT?
CSAT measures how satisfied a customer was with an interaction — it captures overall sentiment. CES measures specifically how easy the interaction was — it captures friction. A customer can be satisfied with an outcome but still find the process effortful. CES identifies the effort; CSAT identifies the satisfaction. Both are valuable and measure different things.
Is a high or low CES score better?
It depends on the scale and calculation method used. On the 1–10 scale where you measure effort directly, a lower score is better (less effort). On the Likert scale where you measure agreement with "the company made it easy," a higher score is better (more agreement). On the percentage-based methods (Method 2 and 3), a higher percentage is better. This is why you must clearly define your methodology — and why comparing CES scores across organisations that use different methods is unreliable.
Why is CES more predictive of loyalty than NPS?
The Gartner research found that the presence of high-effort service interactions reliably drives customers away — more so than the absence of delightful ones. Put simply: customers who have a hard time don't come back. NPS reflects overall brand sentiment which is influenced by many factors. CES captures the specific friction in service interactions that most directly affects the decision to stay or leave. For contact centres in particular, CES is a more actionable predictor of loyalty because it points directly to operational improvement opportunities.
When should you send a CES survey?
Immediately after the interaction you want to measure — ideally within minutes of the contact ending or the transaction completing. The longer you wait, the less accurate the customer's recall and the lower the response rate. Post-call IVR surveys, triggered email surveys, and in-app prompts are all effective channels for immediate CES collection.
What is a good CES benchmark?
Benchmarking CES is genuinely difficult because of the multiple survey formats and calculation methods in use. A CES score calculated with Method 1 (average) cannot be meaningfully compared to one calculated with Method 2 (percentage positive). The most useful benchmark is your own historical trend — is CES improving over time? — combined with correlating CES to business outcomes like repeat contact rates, churn, and NPS. Within-industry comparisons are only valid when the same methodology is used.
Can CES replace NPS or CSAT?
No — CES measures a different dimension of the customer experience and is best used alongside NPS and CSAT, not instead of them. CES captures interaction-level effort. CSAT captures interaction-level satisfaction. NPS captures overall brand loyalty. Each answers a different question, and organisations that use all three have a far more complete picture than those relying on any single metric.
What are the limitations of CES?
CES has several well-documented limitations: it does not capture the overall customer relationship (only a specific interaction); it does not explain why the customer found the experience effortful without an open-text follow-up; benchmarking is difficult due to format variation; it is less suited to complex or emotional interactions where "ease" is not the primary driver of loyalty; and it does not segment by customer type or relationship stage. Like all single metrics, it should be used in combination with other measures rather than in isolation.
Summary: Customer Effort Score
Customer Effort Score is one of the most actionable metrics in CX — because reducing customer effort is the most reliable driver of loyalty. The research is clear: customers who find interactions difficult become disloyal at much higher rates than those who find them easy. And the operational benefits of reducing effort — fewer repeat contacts, fewer escalations, lower cost to serve — are directly measurable.
CES is not a replacement for NPS or CSAT — it measures a different dimension of the customer experience. Used together, all three give organisations a complete and commercially grounded view of how customers feel about them, how loyal they are likely to be, and where the friction is that is driving them away.
The organisations that get the most from CES are those that go beyond tracking the number — pairing it with open-text follow-ups to understand the why, segmenting by touchpoint to identify where friction is highest, closing the loop with customers who take the time to give feedback, and systematically acting on what they hear. That is what transforms CES from a metric into a genuine improvement engine.