#18 A new kind of user

Agents don't bring lived experience to an interaction, but designing for them is still a deeply human-centred challenge.

I recently met a team analysing how AI is accessing their site, and something in the data surprised me. The traffic wasn't random noise. It mapped onto a human working day: rising in the morning, calming at lunch, tapering off in the evening. Through the data, I could see a very human behaviour.

Agents aren't just the latest wave of technology. They're an extension of a very human need, which makes designing for them a deeply human-centred challenge.

We're already seeing what's at stake in our work at IF, as are design teams in government. When a service is experienced through an agent, the designed intent lands directly, at scale, without a human's ability to interpret or adapt. Agents are a new kind of user, one that doesn't bring lived experience to an interaction. When an agent tries to access real-world information, it's often improvising: scraping pages designed for humans and inferring structure that was never made explicit. A person can work around a badly designed interface, but an agent can't. This means the design has to be better.

In some of our recent work, we've been experimenting with how to design for people and agents together by consciously building the layers through which agents interact with public data and services. It’s great to see other examples of this emerging too: in government data infrastructure, in payments where improvisation is never acceptable, in retail where the experience can be genuinely elevated. 

Organisations that design the agent's experience of their service will get better outcomes than those that leave agents to figure it out alone. AI-enabled products and services will improve not because the model changed, but because the environment did.

This is the challenge the design profession needs to move into with intent. I wrote about what that means for our methods here.

The agent becomes an interface between a person and the product and service they’re using. Photo by Nick Fancher on Unsplash.

For 10 years, IF has been helping large organisations build customer-facing services that scale safely, earn trust from the start and deliver long-term impact. We prototype, test, and launch AI products and services that people believe in and want to adopt, while helping organisations change the way they work in the AI age.

What we’ve been working on

Developing knowledge around why AI adoption commonly fails, and how to overcome those barriers

Organisations are making huge investments in AI, but most AI pilots don't make it into production. Recent studies show that only around 5% of pilots have delivered measurable value, and the gap isn't explained by model quality. It comes down to approach.

Just as designing well for agents means understanding the environment they operate in, implementing AI successfully means designing the organisational conditions around the people expected to use it.

At IF, we're working across a range of organisations and industries to identify the patterns behind why pilots stall, so we can bring that understanding to our partners.

Supporting more cities to realise the potential of AI-enabled services

We’re excited to continue working with The Bloomberg Center for Government Excellence at Johns Hopkins University as a technical partner for their City Data Alliance program.

In the next few months, we will support cities across the Americas using data and AI to improve the lives of their residents, without compromising the trust of their colleagues and citizens. 

We’ll be at Bloomberg CityLab with Bloomberg Philanthropies later this month, joining mayors and urban leaders from around the world to discuss how they’re shaping the future of cities.

What we’ve been reading

Human-in-the-loop has become the de facto way for organisations to keep people part of the AI systems they're implementing. But presence in a process isn't the same as being designed for. Marie Claire Dean writes clearly on where human-in-the-loop comes from, and what it would actually take to design well for the people within it.

Institutional AI vs Individual AI: We really enjoyed reading this perspective from George Sivulka, who argues that while AI has made individuals significantly more productive, organisations haven't become more valuable as a result, because we've swapped the motor without redesigning the factory. The real gains will only come when technology and institutional structure are rebuilt together.

We also tried something new at IF this month, holding a debate club to discuss the topics raised in this piece from Slow AI, which argues that the humanities are the last real defence against our eroding ability to evaluate whether AI outputs are actually worth anything. Are literature, philosophy and history the skills that will save our critical thinking? Or is that a position only available to people who already had access to them?

Final thought

We're in a moment where design is being pushed into genuinely new territory: at the interaction level, in how we design for people and agents together, and at the institutional level, in how organisations adopt and sustain new technologies.

If you're working on either of these challenges or grappling with them inside your organisation, we'd love to talk!

— Gemma and the IF Team

This month’s edition was written by Gemma Lord, Director at IF, whose work explores how human-centred design can shape people's experience of a changing world.

Find us on LinkedIn and Medium for more insights and updates.