#10 AI customer support works... until it doesn’t.

Chatbots are more common and capable than ever, and the line between human and automated services is becoming increasingly blurred. 

A recent study by Zendesk found that 50% of consumers think AI agents can be empathetic when addressing concerns. But AI often hides the seams in services and sacrifices transparency at the altar of ‘efficiency’. 

This month I’ve been researching and thinking a lot about AI-enabled customer support, especially chat-based services. It’s a hot topic, and something we’ve worked on at IF for complex organisations like Telus. We’ve added 3 new design patterns to our catalogue and written a blog post to help teams build better AI-enabled customer support services.

Designing chatbots so they adapt to content and context makes services more accessible and inclusive. Research shows that timely support for those who need it most drastically improves outcomes. 

I had an awful experience myself last year, during a family emergency, when seeking support from a hybrid AI/human service (more on that in my blog post). A service that adapted better to my needs would have made an immeasurable difference at the time.

AI will undoubtedly reduce companies’ cost to serve and make customer support more efficient by enabling more people to self-serve. However, it’s clear that AI-enabled chatbots need to be better designed, with careful consideration for the needs and contexts of people using them. 
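As an illustration only (purely hypothetical, and not one of our published patterns), here’s a minimal sketch of what a context-adaptive escalation rule could look like. The signals and thresholds are invented for the example:

```python
from dataclasses import dataclass

@dataclass
class ConversationContext:
    failed_answers: int           # bot replies the user rejected or rephrased
    urgency_signals: bool         # e.g. words like "emergency" or "bereavement" detected
    user_requested_human: bool    # the user explicitly asked for a person

def should_escalate(ctx: ConversationContext) -> bool:
    """Hand off to a human agent when context suggests the bot isn't coping."""
    if ctx.user_requested_human:
        return True                    # never trap people in the bot
    if ctx.urgency_signals:
        return True                    # distress and emergencies go straight to a person
    return ctx.failed_answers >= 2     # two failed attempts is enough

# A user in a family emergency should reach a human immediately.
assert should_escalate(ConversationContext(0, True, False))
```

The point isn’t the rule itself; it’s that escalation is designed around the person’s context rather than the bot’s convenience.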

Our 3 new patterns will help teams design AI support services in ways that build trust and serve the people using them.

[Image: half standard heart emoji, half pixelated heart with a crosshair.]

Is AI capable of caring like a human?

We are Projects by IF. We help our clients move faster, with less risk, by creating products and services that earn and maintain trust. We do 3 things for our clients:

- Grow in new markets.
- Deepen customer relationships.
- Derisk innovation.

What we’ve been up to in the studio

Testing & learning for cities

In the past few weeks, we’ve been helping cities on the Bloomberg Philanthropies City Data Alliance (CDA) AI Track to test their AI-enabled service prototypes. As “test and learn” becomes more prominent in the public sector, we’re learning first-hand what it takes to move fast, iterate, and test in the wild while also attending to risk, trust, and accountability.

Are your GenAI tools creating the outcomes you want? 

Or producing harmful outputs? Or inadvertently sharing sensitive data? Or behaving in unexpected ways? We are working with a UK public organisation to define how feedback and monitoring should work for their AI-enabled systems - and how that changes the user experience, the system’s technical requirements, and its operational constraints.
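To give a flavour of what “monitoring” can mean at its simplest, here’s a hypothetical sketch that scans a model’s output for things that look like personal data before it reaches a user. The patterns and pipeline are illustrative only, not what we’ve built with the client:

```python
import re

# Hypothetical patterns; real systems would use proper PII detection, not regexes.
SENSITIVE_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "uk_phone": re.compile(r"\+44\d{9,10}\b|\b0\d{9,10}\b"),
}

def flag_output(text: str) -> list[str]:
    """Return the names of any sensitive patterns found in a model output."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items() if pattern.search(text)]

flags = flag_output("You can reach the account holder at jane@example.com.")
if flags:
    print(f"Output held for review, flagged: {flags}")  # e.g. feed a monitoring dashboard
```

In practice this would sit alongside human review and feedback loops; a filter alone is nowhere near enough.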

What we’ve been reading

A paper from Nvidia researchers proposes that Small Language Models are the future of agentic AI. Large Language Models are lauded for their near-human performance on a wide range of tasks and their ability to hold a general conversation, but this paper makes the case that smaller models are more suitable, more economical, and more effective in agentic systems. Read the paper.

The Swedish prime minister came under fire for admitting that he regularly consults ChatGPT in his role. It raises plenty of questions about the handling of sensitive information, and about how bias in model training data could influence politicians’ decisions. Read in the Guardian.

The US state of Illinois has banned AI from providing therapy. Lawmakers are grappling with how to protect patients from the growing, and mostly unregulated, use of AI in healthcare. Read in Gizmodo.

A comprehensive taxonomy of hallucinations in Large Language Models was published. It’s dense, it’s technical, but it’s extremely detailed and useful. There are even some data visualisations comparing how different models perform. Read the paper here.

Coming up

This September we’re hosting another breakfast for senior leaders on the topic of ‘AI in a crisis’: how to design teams, products, and decisions that are ready for the moments when AI gets it wrong. We’re almost at capacity, so drop us an email if you’d like to join - [email protected]

Until next time, 

— Ronan and the IF Team

Find us on LinkedIn and Medium for more insights and updates.