#13 Moving fast, safely
What it takes to build the system of feedback, accountability and human-AI partnership that makes speed safe
This month, we hosted our panel event Crossing the AI Divide. We brought together leaders wrestling with the same question: why, after all the hype, are so few organisations actually getting value from AI?
Shane Wilson’s recent piece on the GenAI divide captured this frustration well: most pilots aren’t moving the needle, and business impact remains thin.
But to me, this points to a deeper issue. The real divide isn’t between “AI-leaders” and “AI-laggards”. It’s between the organisations that treat AI as a feature, and those that understand it fundamentally reshapes how decisions are made, how roles shift, and how trust is earned.
That’s why I opened my talk with a murmuration of starlings - hundreds of birds moving fast, fluidly, without colliding. What appears to be chaos is actually shared awareness, continuous feedback, and coordinated adjustment.
This is what moving fast safely looks like in nature.
And it’s exactly what’s missing in most AI deployment today.
That's what we explored in the session: why the AI adoption gap is not a technology problem but a trust and organisational design problem, and what it takes to build the systems of feedback, accountability and human-AI partnership that make speed safe.
This is the work we’ve been doing for a decade. From early explorations into on-device agentic AI with Google in 2017, to helping Blue Cross Blue Shield scale an AI-supported care model across states, to our open catalogue of responsible design patterns used by teams around the world.

What moving fast safely looks like in nature
For 10 years, IF has been helping large organisations build customer-facing services that scale safely, earn trust from the start and deliver long-term impact. We prototype, test, and launch AI products and services that people believe in and want to adopt, while helping organisations change the way they work in the AI age.
Learn more or talk to us.
What we’ve been reading
This month’s reading list connects directly to the themes we discussed on stage: trust, accountability, and the real-world impact of AI when it’s used without enough context, signals or oversight.
BBC: User-centred AI labels. A refreshing alternative to the “AI-inside!” labelling that treats AI as a value in itself. The BBC explores how to normalise, not fetishise, AI, grounding disclosures in what people need to know to make sense of a system, not what the technology hype cycle demands. It aligns closely with our view that transparency should be contextual, not performative.
HMRC’s child benefit crackdown. Highlights how easily trust erodes when decisions that affect families are made through opaque, data-driven processes with limited explanation or recourse, a pattern we see repeatedly in AI deployment too.
A piece by our friend Imogen Parker on AI transcription tools, and alongside that, Rachel Coldicutt’s critique of AI notetakers. Taken together, these pieces show both the promise and the pitfalls of AI-mediated work. Transcription is often presented as a “low-risk” use case, but as Rachel notes, it quietly rewires meetings, power dynamics, memory, and trust. This is exactly why AI isn’t a neutral feature. It changes organisations from the inside out.
Matt’s design principles for his first year at Miro. They resonate strongly with “moving fast safely”, especially the focus on showing the “chain of thought” of AI. Not just showing outputs, but the signals and reasoning that help people stay oriented and confident. Obviously, I also loved the Star Trek references.
Final thought
If there’s one thing I took from our event, and from everything we’ve been reading, it’s this:
Speed is only an advantage if you can see where you’re going. Starlings move fast because they’re connected, aware, and constantly learning.
Organisations at the forefront of AI adoption are discovering the same thing.
Thanks for reading - and as ever, if something here sparks a thought, reply and let us know.
Until next time,
— Sarah and the IF Team