#9 What changes when you build with AI?

Welcome

This month, I found myself in a room with a dozen people who are all wrestling with the same thing: What really changes when you build with AI, especially when you can’t afford to get it wrong?

We were hosting a breakfast with product and design leaders from healthcare, finance, and the public sector. Everyone around the table worked in a regulated environment. Everyone was being asked to move fast with generative AI. And everyone had stories about where the old ways of working were starting to crack.

We talked about what’s breaking, what needs redesigning, and where responsibility actually lives when you build with AI - not in theory, but in delivery.

[Image: A grid of pixelated black cat silhouettes on a yellow background in a retro style]

Delivery gets unpredictable when the old rules don’t apply

Here’s what we heard.

1. Traditional delivery models break under generative AI
Discovery now includes mapping human–AI interactions and the risks they introduce. Delivery needs new rhythms: tighter feedback loops, earlier risk reflection, and space to adapt as both models and users evolve.

2. Human oversight is necessary but costly
Putting a person in the loop adds effort, fatigue, and its own risks. Oversight needs to be designed — not just assumed.

3. Accountability is emotionally charged
People worry about when to trust the model and when they’ll be held responsible if something goes wrong. We need systems that support shared judgment, not just assign blame.

4. Responsible AI is a team capability, not a checklist
It has to show up in delivery, in team roles, and in decision-making rituals, not just in policy. Responsibility is a practice.

We’re seeing this play out in practice on one of our projects. Teams are trying to use generative AI to deliver more with less, but are finding that it isn’t just another tool: it doesn’t slot into existing sprints or fit neatly into current roles.

Some of the shifts it’s prompting:

  • New cross-functional rituals like risk triage and shared validation

  • New responsibilities for product, policy, and engineering teams

  • New questions about when AI is appropriate, not just whether it works

Whether it’s at a roundtable with decision-makers or among practitioners on projects, one principle comes up again and again:

“Responsible AI doesn’t live in a handbook. It lives in delivery.”

What we’ve been reading

An NHS AI tool wrongly flagged a patient for diabetic screening and almost sent them down a long, unnecessary treatment path. The story raised important questions about how generative AI gets deployed in high-trust environments, and what happens when tools intended to “support clinicians” become decision-makers by default.
Read in Fortune →

Jess Morley’s take on this is essential reading: she reminds us that this isn’t a new mistake, just a new interface. “We’ve seen this film before,” she writes. “The problem is not the tech — it’s the failure to learn from last time.”
Read Jess’s post →

The FDA is facing backlash over its internal generative AI tool, ELSA, used for drug evaluation.
Staff reported overly confident, hallucinated outputs, including fictional citations, while leadership continued to promote it as a success. A good reminder that internal tools need just as much scrutiny as public-facing ones.
Read in CNN →

Tesla is once again under scrutiny for its Autopilot branding, as California regulators claim the company misled consumers about what the system could safely do.
It’s a reminder that naming things, whether it's Autopilot or Assistant, isn’t neutral. The words we use shape expectations, trust, and responsibility.
Read in the Washington Post →

A research paper on “Imposing Limits” asks what it would look like to design AI systems that deliberately scale back to meet social, environmental, or institutional boundaries.
It’s a useful counterpoint to the “move fast” narrative, especially for teams working in infrastructure, healthcare, or policy.
Read the paper →

Content Design London posted a sharp blog post on why generative AI is changing their work — and what they’re doing about it. It’s clear, honest, and refreshingly pragmatic about what’s hype, what’s useful, and what we still don’t know.
Read the post →

The UK government has signed a deal with OpenAI to explore how its models could be used in public services. It’s a headline move, and a moment that could shape public sector AI norms for years to come.

My take:

“This is a big moment, but not a surprising one. In many ways, OpenAI is doing what big vendors have always done - position themselves as infrastructure. What’s changed is the technology, the pace, and the scale of ambition.”

“We urgently need clarity on what we expect from public services when they use third-party generative AI tools, and what public value looks like in those contexts.”

What’s next

We’ll be hosting our next breakfast in September. If you’d like to join, or want to talk about what responsible AI looks like in your organisation, get in touch.

Until then, thanks for reading.

– Sarah & the IF team

Find us on LinkedIn and Medium for more insights and updates.