Generative AI in CX Definition
ACXPA Glossary Term

Generative AI

Generative AI is the branch of artificial intelligence that produces new content — text, speech, images, video, code — in response to a prompt, rather than just analysing or classifying existing data. It's the technology behind ChatGPT, Claude, Gemini, Copilot and the wave of "AI" features now embedded in nearly every contact centre and CX platform on the market.

It's also the most over-promised technology to hit the customer experience industry in a decade. The honest take: Generative AI is genuinely transformative for some CX problems, useless for others, and actively damaging when deployed without proper guardrails. This page explains the difference.

Why it matters

Gen AI has changed what's economically possible in CX — particularly in agent assist, summarisation, knowledge management and quality assurance.

Why it's tricky

The technology is genuinely capable, but most CX deployments fail because of bad scoping, missing data, unclear ownership and vendor hype — not bad models.

What this guide covers

What Generative AI actually is, where it works in CX and contact centres today, where it doesn't, and how to spot the difference between substance and theatre.

What is Generative AI?

Generative AI is a category of artificial intelligence models that can produce new content in response to a prompt. The output looks human-created — coherent prose, plausible images, working code, natural speech — because the models are trained on enormous volumes of human-created content and have learned the statistical patterns of how that content is structured.

The most common form in CX is the large language model (LLM) — a model trained on text that can generate text. ChatGPT, Claude, Gemini, Llama and similar are all LLMs. There are also generative models for images (Midjourney, DALL-E), audio (ElevenLabs), video (Sora, Veo), and code (GitHub Copilot). In customer experience contexts, "Generative AI" almost always means an LLM doing something with text or speech.

Plain English

A Generative AI model is a piece of software that has read most of the public internet, learned the patterns of language, and can now write something new when you ask it to. It doesn't "understand" what it's saying in the way a human does — it predicts the most plausible next word, given everything it has read. That's why it can sound brilliant and confidently wrong in the same sentence.

What Generative AI IS

  • A pattern-matching system that produces new content in response to a prompt
  • Genuinely useful for summarisation, drafting, classification and reformatting
  • A productivity multiplier for knowledge work — including most contact centre tasks
  • Confidently wrong some of the time — it does not know when it's hallucinating
  • Only as good as the data it was trained on, plus what you give it at the prompt

What Generative AI IS NOT

  • "Intelligent" in any human sense — it has no understanding, only patterns
  • Reliable on facts unless grounded in a trusted data source
  • A drop-in replacement for properly trained agents
  • The same thing as "AI" — older AI like predictive analytics is not generative
  • A solution looking for a problem — deploy it for a real use case or not at all

How Generative AI works (the short version)

You don't need a computer science degree to deploy Generative AI in a CX environment, but you do need to understand the basics — because every limitation you'll hit comes from how these models actually work.

1. Training

The model is trained on a vast dataset of text (and sometimes images, audio, code). During training, it learns statistical patterns — which words tend to follow which words, which phrases occur in which contexts, how documents are structured.
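The "statistical patterns" idea can be illustrated with a toy bigram counter: tally which word follows which across a corpus. Real LLMs learn these patterns with neural networks across billions of parameters, not literal counts, so treat this purely as an illustration of the principle:

```python
from collections import defaultdict

def train_bigrams(corpus: list[str]) -> dict:
    """Count which word follows which across a corpus of sentences.
    A toy stand-in for 'learning the statistical patterns of language'."""
    counts = defaultdict(lambda: defaultdict(int))
    for sentence in corpus:
        words = sentence.lower().split()
        for prev, nxt in zip(words, words[1:]):
            counts[prev][nxt] += 1
    return counts

corpus = [
    "the customer wants a refund",
    "the customer wants a callback",
    "the agent offers a refund",
]
model = train_bigrams(corpus)
print(dict(model["the"]))  # {'customer': 2, 'agent': 1}
```

After training on those three sentences, the model "knows" that "customer" follows "the" twice as often as "agent" does — and that is all it knows.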

2. Prompting

You give the model an input — a question, an instruction, a piece of text to summarise. This is the prompt. The quality and clarity of the prompt have an enormous effect on the output.

3. Generation

The model generates a response one token (roughly one word fragment) at a time, predicting the most plausible next token given everything that came before. There's no "lookup" or "fact check" — just probability-weighted prediction.
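Token-by-token generation is, at its core, repeated weighted sampling. The sketch below uses a hand-written toy distribution (a real model scores tens of thousands of tokens with a neural network), but it shows the key point: there is no lookup step, only prediction:

```python
import random

def generate(next_token_probs: dict, start: str, max_tokens: int = 5, seed: int = 0) -> str:
    """Generate text one token at a time by probability-weighted choice.
    next_token_probs maps a token to {candidate_next_token: probability}."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(max_tokens):
        dist = next_token_probs.get(out[-1])
        if not dist:
            break  # the model has no continuation; stop
        tokens, weights = zip(*dist.items())
        out.append(rng.choices(tokens, weights=weights)[0])
    return " ".join(out)

# Toy distribution: no lookup, no fact check -- just weighted prediction.
probs = {
    "your":   {"refund": 0.7, "order": 0.3},
    "refund": {"is": 1.0},
    "is":     {"processed": 0.6, "pending": 0.4},
}
print(generate(probs, "your"))
```

Notice that "is processed" and "is pending" are both plausible continuations — the sampler will happily emit either, regardless of what is actually true about the refund.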

4. Grounding (optional but critical for CX)

Production CX systems add a layer that retrieves information from your trusted documents (knowledge base, policy library, CRM) and injects it into the prompt. This is called RAG — Retrieval-Augmented Generation — and it's the difference between a useful agent assist tool and a confident liar.
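The shape of RAG can be sketched in a few lines: retrieve the most relevant trusted documents, then build a prompt that instructs the model to answer only from them. This toy version ranks documents by crude keyword overlap (production systems use embedding similarity) and omits the model call itself; the grounded prompt is the point:

```python
def retrieve(query: str, documents: dict[str, str], top_k: int = 2) -> list[str]:
    """Rank documents by crude keyword overlap with the query.
    Production RAG uses embedding similarity; same idea, better maths."""
    q = set(query.lower().split())
    scored = sorted(
        documents.items(),
        key=lambda kv: len(q & set(kv[1].lower().split())),
        reverse=True,
    )
    return [text for _, text in scored[:top_k]]

def build_grounded_prompt(query: str, documents: dict[str, str]) -> str:
    """Inject retrieved policy text so the model answers from it,
    not from its training data."""
    context = "\n".join(retrieve(query, documents))
    return (
        "Answer ONLY from the policy extracts below. "
        "If the answer is not there, say you don't know.\n\n"
        f"Policy extracts:\n{context}\n\nCustomer question: {query}"
    )

docs = {
    "returns": "Change-of-mind returns are accepted within 30 days with proof of purchase.",
    "refunds": "Refunds are issued to the original payment method within 5 business days.",
    "shipping": "Standard shipping takes 3 to 7 business days nationally.",
}
print(build_grounded_prompt("how long do refunds take", docs))
```

The instruction to say "I don't know" when the context doesn't contain the answer is the guardrail half of grounding — retrieval without it still leaves room for invention.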

Why this matters for CX

Every Gen AI failure mode in customer service traces back to step 3. The model is predicting plausible text, not retrieving facts. Without proper grounding (step 4), it will happily invent a refund policy, a returns timeframe or a product feature that doesn't exist — and sound completely confident doing it.

Why Generative AI matters in CX

Generative AI changes what's economically possible in customer experience. Things that used to require a human — drafting a personalised email, summarising a 20-minute call, scoring quality across thousands of interactions, answering a customer at 2am — can now be done at near-zero marginal cost. That doesn't mean they should be, but it changes the conversation.

For CX leaders

Gen AI raises the ceiling on personalisation and the floor on consistency. Done well, every customer gets a tailored, on-brand response. Done badly, every customer gets the same hallucinated answer at 10x the speed.

For contact centre leaders

Real, measurable productivity gains in summarisation, agent assist, QA and knowledge search — when scoped to internal-facing use cases first. The risk is reaching for customer-facing automation before the foundations are in place.

For service designers

Gen AI gives you the ability to design adaptive journeys that respond to context — but it also gives you the ability to ship broken journeys at scale. The discipline of mapping journeys end-to-end matters more, not less, when AI is in the loop.

Applications across customer experience

Generative AI shows up across the entire customer lifecycle. Some of these are mature and worth deploying today; others are still finding their footing. We'll get to the contact centre specifics in the next section — these are the broader CX applications you'll see referenced in vendor demos, board papers and industry conferences.

💬 Conversational assistants

The chatbots of the LLM era — capable of natural conversation, multi-turn reasoning and tool use. Far better than the rule-based bots of 2018, but only when properly grounded in your knowledge base.

📝 Content and copy generation

Drafting marketing emails, knowledge articles, FAQs, product descriptions, social posts. Most useful as a productivity tool for human writers — not as an unsupervised content factory.

🎯 Personalisation at scale

Generating tailored offers, recommendations and journeys based on customer data. Genuinely powerful, but only as good as the data underneath — and the consent framework around it.

🔍 Voice of Customer analysis

Summarising thousands of customer comments, support tickets, NPS verbatims or social mentions into themes and trends. One of the strongest current use cases — Gen AI is excellent at finding patterns in unstructured text.

Generative AI in contact centres

This is where Gen AI is having the biggest operational impact today. Contact centres run on text and speech — calls, chats, emails, tickets, notes — and Gen AI is purpose-built for that kind of work. Below are the use cases that are actually shipping in Australian contact centres right now, ranked roughly by maturity.

1. Call and chat summarisation

The single most reliable Gen AI use case in contact centres. The model takes a call transcript or chat log and produces a structured summary — what the customer wanted, what was done, what's outstanding. Typically saves 30–90 seconds of after-call work per interaction. Internal-facing, low risk, immediately measurable.
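In practice, the engineering in a summarisation pipeline lives in the prompt contract and the output validation, not the model call. A minimal sketch — the model call itself is omitted, and `parse_summary` would run on whatever text the model returns:

```python
REQUIRED_FIELDS = ("ISSUE", "ACTION", "OUTSTANDING")

def build_prompt(transcript: str) -> str:
    """Constrain the format so the output is machine-checkable."""
    return (
        "Summarise this contact centre interaction. Return exactly three "
        "labelled lines -- ISSUE:, ACTION:, OUTSTANDING: -- and nothing else.\n\n"
        + transcript
    )

def parse_summary(raw: str) -> dict[str, str]:
    """Validate the model's output instead of trusting it blindly."""
    summary = {}
    for line in raw.strip().splitlines():
        if ":" in line:
            key, _, value = line.partition(":")
            if key.strip().upper() in REQUIRED_FIELDS:
                summary[key.strip().upper()] = value.strip()
    missing = [f for f in REQUIRED_FIELDS if f not in summary]
    if missing:
        raise ValueError(f"Summary missing fields: {missing}")
    return summary

raw = ("ISSUE: Billing dispute on March invoice\n"
       "ACTION: Credit of $42 applied\n"
       "OUTSTANDING: none")
print(parse_summary(raw))
```

Rejecting malformed output (rather than writing it to the CRM as-is) is what keeps a summarisation tool "low risk" in practice.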

2. Agent assist and next-best-action

Real-time prompts surfacing knowledge base articles, suggested responses, compliance reminders and de-escalation cues during a live conversation. Properly grounded, this can dramatically reduce ramp time for new agents and improve consistency. Watch for cognitive overload — if agents are reading prompts instead of listening, you've made it worse.

3. Automated quality assurance

Gen AI can score 100% of calls against your QA framework, surface coaching opportunities, and flag compliance risks. The mature use case is augmenting human QA teams — letting them focus on the highest-value calls while the AI handles coverage. The immature use case is replacing human QA entirely; the bias and consistency issues there are real.
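The "augment, don't replace" pattern reduces to triage: score every call automatically, then route the low-scoring or low-confidence ones to human reviewers. A sketch, with made-up fields and thresholds:

```python
def triage_for_human_review(scored_calls: list[dict],
                            score_floor: int = 60,
                            confidence_floor: float = 0.7) -> list[dict]:
    """AI covers 100% of calls; humans review the ones that matter.
    scored_calls: dicts with 'id', 'score' (0-100), 'confidence' (0-1)."""
    flagged = [
        c for c in scored_calls
        if c["score"] < score_floor or c["confidence"] < confidence_floor
    ]
    return sorted(flagged, key=lambda c: c["score"])  # worst first

calls = [
    {"id": "c1", "score": 92, "confidence": 0.9},
    {"id": "c2", "score": 41, "confidence": 0.8},   # low score -> human review
    {"id": "c3", "score": 85, "confidence": 0.5},   # low confidence -> human review
]
print([c["id"] for c in triage_for_human_review(calls)])  # ['c2', 'c3']
```

The confidence gate matters as much as the score gate: a model that is unsure should hand off, not guess.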

4. Knowledge base search and answer generation

Replacing keyword search with natural-language Q&A grounded in your knowledge base. Internal-facing first (agents asking the system), customer-facing later (customers asking the chatbot). The internal version is genuinely useful today; the customer-facing version requires real investment in content quality and guardrails.

5. IVR and voice bots

Conversational IVR powered by LLMs is improving rapidly — much better than the press-1-press-2 menus of the 2010s. Still not at the point where most customers prefer it to a human for anything non-trivial, and the cost-per-interaction maths often doesn't pencil out yet at Australian wage rates.

6. Customer-facing chat (autonomous)

Fully autonomous customer-facing AI chat — no human in the loop. The riskiest use case and the one most often pitched first by vendors. Possible to do well, but requires mature knowledge management, robust escalation paths, and clear scope. Most failed deployments started here instead of ending here.

The pattern

Internal-facing use cases (summarisation, agent assist, QA, knowledge search) ship reliably and deliver measurable value. Customer-facing autonomous use cases require an order of magnitude more investment to ship safely. Most contact centres get the most value from the internal-facing wins long before they need to think about replacing the agent.

Hype vs reality — separating substance from theatre

The CX vendor market is currently saturated with "AI" claims. Some are real product capabilities; some are a thin wrapper over an LLM API call; some are pure marketing. A few honest tests for distinguishing the three:

"Powered by AI" with no specifics

If a vendor can't tell you which model, what it's grounded on, how it handles hallucinations and what happens when it gets it wrong — the AI capability is probably superficial. Ask for the failure mode, not the demo.

Demos that only show ideal cases

A 90-second demo with a clean question and a perfect answer tells you nothing. Ask the vendor to show you the system handling an angry customer with an edge-case query and incomplete account data — that's where the model either holds up or falls apart.

"Replaces your contact centre"

No deployment in Australia in 2026 has actually done this. Pilots that claimed to are either still piloting or have quietly added humans back into the loop. Treat the claim as marketing, not a roadmap.

ROI projections without baseline data

"40% AHT reduction" and "75% deflection" are recurring vendor numbers with very little provenance. If you can't see the methodology, the cohort, the time period and the comparison baseline — it's a sales talking point, not evidence.

The honest version: the gap between "we have an AI feature" and "we have an AI capability that actually works in production for our customers" is enormous. Most vendor pitches sit firmly in the first category. Verify before you buy.

The ACXPA Standards lens

ACXPA's editorial position on Generative AI in CX is simple: the technology must serve the customer outcome, not just the cost line. A deployment that reduces operational cost while degrading the experience is not a CX win — it's a P&L improvement that will eventually show up as churn, complaints, regulatory attention or reputational damage.

Anchored to the ACXPA CX Standards, that means a few non-negotiables:

🎯 Customer outcome first

Every AI deployment must be evaluated on whether it improves the customer's outcome — not just on whether it reduces cost or AHT. A faster bad experience is still a bad experience.

🔍 Transparency

Customers should know when they're interacting with AI, especially when the conversation has consequences (account changes, complaints, financial decisions). "Pretending to be human" is not a strategy — it's a complaint waiting to happen.

🛟 Easy escape to a human

If the AI is failing the customer, getting to a human must be obvious and immediate. "Are you a robot?" should never be a conversation the customer has to win.

📚 Grounded in trusted sources

Customer-facing AI must be grounded in your authoritative content — policies, knowledge base, product information — not just the underlying LLM's training data. Ungrounded AI confidently invents answers.

⚖️ Accountability

If the AI gets it wrong, your business is accountable — not the vendor, not the model. "The AI told the customer the wrong thing" is not a defence in a complaint, a regulator's office or a court.

🔒 Privacy by design

Customer data going into a Gen AI system needs the same care as any other PII. That includes prompt logs, model fine-tuning data, and any third-party APIs in the data flow. Read the data residency clauses.

Common pitfalls

Most failed Gen AI deployments in Australian CX share a small number of root causes. None of them are about the technology itself.

Starting customer-facing

The riskiest use case (autonomous customer-facing chat) is also the one most often piloted first. The internal wins (summarisation, agent assist, QA) are larger, lower-risk, and build the operational maturity needed for customer-facing later. Reverse the order and the project usually fails.

No grounding strategy

Plugging a generic LLM into a customer channel without retrieval grounding is asking it to make things up. Hallucinations in CX are not theoretical — they show up as misquoted policies, invented products and refunds the business never offered.

Treating it as an IT project

Gen AI in CX is a content, knowledge management, change management and training problem dressed up as a technology project. If your CX team isn't deeply involved in design and ongoing tuning, the model has nothing good to learn from.

Measuring inputs, not outcomes

"We deflected 60% of contacts" is not the same as "we resolved 60% of customer issues". Deflection and containment metrics measure how often the AI ended the conversation — not how often the customer got what they needed. The right metrics are CSAT, FCR, repeat-contact rate and complaint rate.
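The gap between deflection and resolution is simple arithmetic worth making explicit. With illustrative numbers:

```python
def true_resolution_rate(total_contacts: int, contained: int,
                         repeat_within_7d: int) -> tuple[float, float]:
    """Containment counts conversations the AI ended; resolution subtracts
    the contained customers who came back through any channel."""
    containment_rate = contained / total_contacts
    resolution_rate = (contained - repeat_within_7d) / total_contacts
    return containment_rate, resolution_rate

containment, resolution = true_resolution_rate(
    total_contacts=10_000, contained=6_000, repeat_within_7d=1_800
)
print(f"Containment: {containment:.0%}")  # the vendor slide: 60%
print(f"Resolution:  {resolution:.0%}")   # the customer's reality: 42%
```

The repeat-contact count has to come from cross-channel data — customers who were "deflected" in chat and then rang the contact centre are invisible if you only measure the chat channel.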

No human-in-the-loop for high-risk decisions

Gen AI making decisions on refunds, complaints, financial hardship, vulnerable customer flags — without a human gate — is a regulatory and reputational landmine. The cost saving is not worth the downside risk.

Chasing the productivity number, ignoring the experience

Most public AI-in-CX case studies lead with productivity gains — handle time, deflection, cost-per-contact. The CX measures are conspicuously absent. If a deployment makes operations cheaper while making the experience worse, you've built a churn engine.

The pattern: the vast majority of failed Gen AI deployments in CX are not failures of the technology — they're failures of scoping, governance, content quality, and a leadership team that wanted the headline more than the result.

How to scope a Generative AI deployment

If you're considering Gen AI for any CX or contact centre use case, this is the rough sequence that separates the deployments that ship from the ones that quietly disappear after the pilot.

1. Pick a problem worth solving

Start with a real, measurable operational problem — long after-call work, slow knowledge search, inconsistent QA coverage, ramp time for new agents. Not "let's do something with AI". Tools should serve problems, not the other way around.

2. Audit your content

Gen AI is only as good as the knowledge you ground it on. Out-of-date policies, conflicting articles, missing edge cases — your knowledge base problems become AI problems instantly. Fix the content first, deploy the AI second.

3. Start internal-facing

Agent-facing tools (summarisation, knowledge assist, draft responses, QA scoring) before customer-facing tools. Lower risk, faster value, and they build the muscle you need for customer-facing later.

4. Define what good looks like — including CX measures

Set clear success metrics before you deploy. Productivity numbers (AHT, ACW, deflection) and experience numbers (CSAT, FCR, complaint rate) — both, every time. If the deployment moves productivity in the right direction and experience in the wrong direction, you don't have a win.
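The "both, every time" rule can be expressed as a simple gate: a pilot only counts as a win if productivity improved and experience held. Metric names and thresholds here are illustrative:

```python
def pilot_verdict(baseline: dict, pilot: dict, csat_tolerance: float = 0.0) -> str:
    """A deployment 'wins' only if productivity improves AND experience holds.
    Lower is better for aht_seconds; higher is better for csat and fcr."""
    productivity_win = pilot["aht_seconds"] < baseline["aht_seconds"]
    experience_held = (
        pilot["csat"] >= baseline["csat"] - csat_tolerance
        and pilot["fcr"] >= baseline["fcr"]
    )
    if productivity_win and experience_held:
        return "win"
    if productivity_win:
        return "cheaper, not better"   # the churn-engine outcome
    return "no operational case"

baseline = {"aht_seconds": 420, "csat": 4.2, "fcr": 0.71}
pilot    = {"aht_seconds": 365, "csat": 3.8, "fcr": 0.69}
print(pilot_verdict(baseline, pilot))  # cheaper, not better
```

Agreeing on this gate before the pilot starts is the whole point — defined after the fact, the thresholds tend to drift toward whatever the pilot achieved.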

5. Plan the failure modes

What happens when the model is confidently wrong? When it's offline? When it leaks data? When a customer tries to jailbreak it? Every production Gen AI deployment needs a documented answer for each. "It won't happen" is not an answer.

6. Pilot, measure, iterate

Run a tightly scoped pilot. Measure it against your defined success criteria — not against the vendor's. If it works, expand the scope. If it doesn't, kill it without sentiment. The sunk cost of the pilot is far smaller than the sunk cost of a year defending a deployment that doesn't work.

How to know it actually worked

The honest measure of a Generative AI deployment is whether it made the customer experience better at a sustainable cost — not just whether it reduced operational expense. The metrics below are the ones that survive scrutiny when the deployment is reviewed twelve months in.

Productivity (necessary but not sufficient)

Handle time, after-call work, agent ramp time, deflection rate. These need to move in the right direction or the project doesn't have an operational case. But they are not the whole picture.

Experience (the half most people skip)

CSAT, FCR, complaint rate, repeat-contact rate, escalation rate. If these go backwards while productivity goes forward, the deployment is making the experience worse — and the savings will be eaten by churn and complaints.

Quality and risk

Hallucination rate (sampled human review), policy compliance, escalation-to-human success rate, data privacy incidents. These are the metrics that protect you when the regulator, the board or the journalist comes asking.
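A sampled hallucination review is a basic estimation exercise: draw a random sample of AI interactions, have humans mark the factual errors, and report the rate with a margin of error. A sketch — the normal-approximation margin is a simplification, and the sample size is illustrative:

```python
import math
import random

def sample_for_review(transcript_ids: list, n: int = 400, seed: int = 42) -> list:
    """Random sample of AI interactions for human fact-checking."""
    return random.Random(seed).sample(transcript_ids, n)

def hallucination_estimate(errors_found: int, sample_size: int,
                           z: float = 1.96) -> tuple[float, float]:
    """Point estimate plus approximate 95% margin of error (normal approx)."""
    p = errors_found / sample_size
    margin = z * math.sqrt(p * (1 - p) / sample_size)
    return p, margin

rate, margin = hallucination_estimate(errors_found=14, sample_size=400)
print(f"Hallucination rate: {rate:.1%} ± {margin:.1%}")  # 3.5% ± 1.8%
```

The sampling seed and review rubric should be documented alongside the number — an unreproducible hallucination rate is exactly the kind of metric that fails scrutiny twelve months in.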

The honest test

If you can't show that customer experience metrics held or improved alongside the productivity gains — you haven't proven the deployment worked. You've only proven it was cheaper. Those are different claims.

Frequently Asked Questions

Is Generative AI the same as AI?

No. AI is a broad field that includes predictive analytics, computer vision, machine learning classifiers, and many other techniques. Generative AI is the specific subset that produces new content. Most "AI" in contact centres before 2023 was not generative — it was classification (intent detection, sentiment scoring) or prediction (call volume forecasting). They're different tools with different use cases.

Will Generative AI replace contact centre agents?

Not in the way the headlines suggest. The honest current picture: Gen AI is automating parts of agents' workflows (summarisation, knowledge search, draft responses) — making good agents more productive — and is starting to handle some simple, well-scoped customer-facing interactions autonomously. What it isn't doing, in any deployment we've seen, is replacing the experienced agent for complex, emotional or high-stakes conversations. Roles will change. Headcount may compress. But the "no humans" contact centre remains marketing, not reality.

Is "AI hallucination" a real risk in CX, or just a tech-press story?

Real and material. Without proper grounding, LLMs invent policies, products, refund timeframes and procedures with complete confidence. In customer service contexts that translates directly into misinformation, complaints and potential regulatory exposure. The fix isn't "wait for better models" — it's retrieval grounding (RAG), guardrails, and human-in-the-loop on high-risk decisions. Treat hallucination as a design problem, not a future problem.

Should we tell customers they're talking to AI?

Yes — and it's increasingly likely to become a regulatory requirement. Beyond compliance, it's just good practice: customers who realise mid-conversation that they've been talking to an AI when they thought they were talking to a human escalate harder and complain louder. Be upfront. Most customers are fine with AI-handled interactions for routine queries — what they object to is being deceived about it.

Is there a "best practice" deflection or containment rate to aim for?

No, and we'd push back on the framing. Deflection and containment measure how often the AI ended a conversation — not how often it solved the customer's problem. A 70% containment rate where 30% of those customers come back through another channel because they didn't get what they needed is a worse result than a 40% containment rate where the contained interactions were genuinely resolved. Measure resolution, not deflection.

Where does Gen AI fit in our existing technology stack?

Most of the major contact centre and CX platforms now embed Gen AI features natively (NICE, Genesys, Salesforce, Zendesk, Five9 and so on). For most organisations, the right starting point is enabling and tuning the features in the platforms you already own — not procuring a separate AI vendor. Build-vs-buy-vs-enable is a real decision; "enable what you have" is usually the right first move.

What about data privacy and where customer data goes?

This is the area most often glossed over in vendor pitches. Critical questions: where is the data processed (Australia, US, EU)? Is customer data used to train models? Are prompts and responses logged, and for how long? What happens at contract termination? Read the data residency and training-data clauses carefully — this is where the difference between vendor-grade and enterprise-grade products shows up.

How do we measure return on investment?

By comparing pre- and post-deployment numbers across both productivity and experience metrics, not just productivity. Cost savings + maintained or improved CSAT/FCR/complaint rate = real ROI. Cost savings + degraded CX metrics = a P&L improvement that will be paid back in churn. The honest ROI calculation is twelve months out, not three.

Where to next

📞 Call Centre Hub

The home of contact centre frameworks, calculators and resources — the place where most Gen AI deployments live in practice.

Go to Call Centre Hub
🌟 CX Hub

Frameworks, standards and resources for designing customer experience — where Gen AI must serve the outcome, not the cost line.

Go to CX Hub
🤖 CX Automation Suppliers

The ACXPA Supplier Directory listing for CX Automation vendors — independent, practitioner-vetted, and the right place to start a shortlist instead of a Google search.

View CX Automation Suppliers
☎️ Call Centre Technology Suppliers

The ACXPA Supplier Directory listing for contact centre technology — the platforms most Gen AI features ship inside today.

View Call Centre Technology Suppliers

Become an ACXPA Member

ACXPA members get full access to expert-led roundtables — including searchable transcripts of past discussions on Generative AI in CX and contact centres — plus the Member Bytes video library, premium downloads, exclusive Australian Call Centre Rankings data, 25% off CX Skills training, and a community of CX and contact centre practitioners cutting through the hype together.


Summary

Generative AI is the most genuinely transformative technology to hit customer experience in a decade. It is also the most over-promised, mis-deployed and badly scoped. Both things are true at the same time, and the gap between them is where the failed projects live.

The honest framing: Gen AI is real, useful, and shipping in Australian contact centres today — particularly for internal-facing use cases like summarisation, agent assist, knowledge search and quality assurance. The customer-facing autonomous applications are improving rapidly but require an order of magnitude more investment in content quality, guardrails and escalation design before they're ready for the front line.

The deployments that succeed are the ones scoped against a real operational problem, grounded in trusted content, anchored to both productivity and experience metrics, and aligned to the ACXPA CX Standards principle that the customer outcome comes first. The deployments that fail are the ones chasing the headline. Pick the first kind.

Copyright © 2026 | Australian Customer Experience Professionals Association | Website Terms of Use | Privacy Policy

Log in with your email address

or Become an ACXPA Member

Forgot your details?

Create Account