#12 Building confidence in the AI age
Everywhere we go, we hear a version of the same sentence: “Everyone wants to use AI, but no one feels ready.”
It’s a fair observation. The tools are advancing faster than the practices, guardrails, and skills that support them. Yet the biggest risk isn’t moving too fast; it’s waiting too long to learn how to work with them safely.
Every organisation now faces a choice: experiment carefully, or fall behind the curve set by those who do. The sooner teams start learning in controlled, deliberate ways, the faster they’ll build the confidence and judgement that real adoption requires.
That confidence doesn’t come from hype or training courses. It comes from seeing what good looks like, testing ideas in safe environments, and knowing where the boundaries are.
Once your team understands how an AI system works (its purpose, its limits, and its points of failure), they're more likely to use it responsibly. They ask better questions. They catch issues earlier. They build systems that earn trust.

Don’t wait to jump: build your own parachute
For 10 years, Projects by IF has been helping large organisations build customer-facing services that scale safely, earn trust from the start and deliver long-term impact.
We prototype, test, and launch AI products and services that people believe in and want to adopt, while helping organisations change the way they work in the AI age.
Learn more or talk to us.
What we’ve been up to in the studio
Scaling AI in cities
We’ve just wrapped up the prototyping phase of a large programme of work with Bloomberg Philanthropies, helping cities around the world adopt AI responsibly. Dozens of city leaders have been prototyping AI-enabled services, from climate resilience to transport, and planning the next stage: scaling what works safely. We’re supporting them to put in place the strategic enablers, like data infrastructure and evaluation frameworks, that make AI adoption sustainable.
Shaping responsible AI strategies in government
We’re working with a major UK central government department on a new AI strategy that balances innovation with accountability. The focus is on designing feedback loops, not just frameworks, so teams can learn in real time as systems evolve.
…and more, which we hope to be able to talk about soon.
What we’ve been reading
How UX research shapes AI evaluation
A thoughtful piece from Microsoft’s UX Research team on how human perception influences how we judge AI performance. The article shows how even well-designed evaluation frameworks can miss the subtleties of what “good” looks and feels like to people. It’s a reminder that evaluation isn’t only about metrics or benchmarks, but about aligning with human sense-making. Read the article.
AI gun detection gone wrong
The BBC reports how an AI-powered security system misidentified a packet of crisps as a firearm, triggering an armed police response. It is a sharp reminder that AI can create new kinds of risk when its outputs are trusted without enough context or oversight. Read the article.
Even “temperature zero” isn’t predictable
A new paper and accompanying podcast discussion show that even when a model’s temperature is set to zero, which should in theory make its outputs deterministic, you can still see small differences between runs. That is because of how providers batch and process requests behind the scenes. It is a helpful reminder that AI systems are rarely fully stable, and that testing how they behave in real conditions matters as much as setting the right parameters.
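If you want to see this for yourself, one quick experiment is to send the same prompt several times at temperature zero and compare the outputs. The sketch below is a minimal illustration only: it assumes the official openai Python client, an API key in your environment, and a placeholder model name.

```python
# Minimal sketch: send the same prompt several times at temperature 0
# and check whether the responses are actually identical.
# Assumes the official `openai` Python client and an API key set in the
# environment; the model name below is a placeholder.
from collections import Counter

from openai import OpenAI

client = OpenAI()
PROMPT = "List three risks of deploying AI in public services."

outputs = []
for _ in range(5):
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": PROMPT}],
        temperature=0,  # "fully deterministic" in theory
    )
    outputs.append(response.choices[0].message.content)

# In practice you may still see more than one distinct output, because of
# how providers batch and route requests server-side.
counts = Counter(outputs)
print(f"{len(counts)} distinct output(s) across {len(outputs)} identical calls")
```

Even a small test like this is a useful habit: it grounds claims about a system’s stability in observed behaviour rather than its settings.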
ChatGPT Atlas
Simon Willison’s explainer on ChatGPT Atlas, OpenAI’s new AI-powered browser, looks at what happens when an assistant can see and act on what people are browsing. Within 24 hours of launch it suffered a prompt injection attack, and privacy concerns quickly followed. It’s a revealing case study in how new AI capabilities can create new kinds of exposure, showing that accountability in AI is as much about containment as visibility. Read more.
Leading in a relational world
In IF’s latest post, I explore why progress with AI and data so often depends less on the technology itself and more on the relationships around it. Building trustworthy systems means leading with purpose, clarity, and connection. Read the blog post.
Coming up
📅 An invitation to join us: How to compete in the AI age without breaking trust
This November in London, we’ll be joined by leaders from design, policy, and AI delivery to discuss how to avoid the fail-fast trap and build systems people believe in. The panel, chaired by Richard Allan, includes speakers from DSIT, Experian, Monzo, and DeepMind.
We’ll discuss what it really takes to scale AI responsibly, where the trust barriers lie, and how leaders are building momentum inside complex organisations. Expect practical insight, open discussion, and a chance to connect with peers tackling the same challenges. Register your interest to join us!
Until then, thanks for reading.
– Sarah & the IF team