#7 Can we open the black box?

Welcome to our next newsletter. It’s flown by!
The UK has been rocked by cyber attacks this month, with large-scale disruption to national retailers and government agencies.
These organisations have struggled to recover, with some services down for weeks (or months? We’ll see). And across the globe, the theft of sensitive personal data has become a sobering new norm.
It’s a hard pill to swallow. Digital government and online retail are part of everyday life, and we assume that the information we share with trusted brands is safe inside modern, high-tech black boxes. But it’s increasingly obvious that many services are deeply vulnerable, and not always worthy of our trust.

What will we find in the black box?
The best organisations respond with urgency, competence, and transparency. But even the most energetic response can’t undo years of underinvestment in technology infrastructure.
Take 23andMe, where breaches exposed the genetic data of millions. That’s information that can’t simply be changed like a password. This isn’t just about reputational damage or public trust. These are deep, irreversible harms that raise urgent questions the industry still isn’t prepared to answer.
We are Projects by IF. We help our clients move faster, with less risk, by creating products and services that earn and maintain trust. We help them do three things:
- Grow in new markets.
- Deepen customer relationships.
- Derisk innovation.
Learn more or talk to us.
What we’ve been up to in the studio
Reporting on Digital Identity in the UK
Our recent work (with Oliver Wyman and Perspective Economics) for DSIT has been published: Lifting the lid on the UK digital identity ecosystem: Digital Identity Sectoral Analysis 2025. The report shows that the tension between convenience and concerns about privacy and security continues to be an issue for the public. Perhaps unsurprising, given the number of high-profile cyberattacks in 2025 alone. The government has a role to play in holding digital ID providers (such as banks) to high standards.
Navigating responsible AI practices
We’re deep in some work for a UK public organisation, establishing their responsible AI delivery practices. In parallel with becoming AI-enabled, they’re acting quickly to set standards for explainability, public benefit, human oversight, and many other topics. Our job is to help them define HOW to do it, in a way that’s approachable and that sticks. Across government, the persistent challenge is to build capability in technology, design, and organisational activity.
And here’s Sarah’s take on the missing ROI of GenAI adoption. We’re seeing an AI rush that looks like vague pilots and shallow services – and they flounder. To drive lasting results, the industry needs to double its efforts to solve clear needs and whole problems.
Towards more respectful advertising practices
Lastly, our colleague Peter has been advocating for Meta to adopt a different approach to behavioural advertising. See the press release or the full report: Privacy without paying.
What we’ve been reading
We really liked this piece from the United Nations Development Programme on orientating AI systems toward public good:
‘We don’t see responsible AI as a checklist. We see it as a public capability, built over time, in context, and through application. It can’t be done by frameworks alone or outsourced entirely. It must be practiced, questioned, owned, and continually adapted by the institutions that use it.’
And this month we’ve been digging into fairness in AI systems. What helps us to prove that our technology is behaving fairly? There are partial technical answers, like Algorithmic Transparency, but Joe Tomlinson from the Public Policy Design team in the UK government makes the point that demonstrating fair process is fundamental. And this has a big impact on human behaviour: ‘Taxpayers who believed that they were treated fairly and respectfully by tax authorities were more likely to perceive the tax system as legitimate.’ As Joe says, ‘process design must be part of policymaking’.
Meanwhile, a fascinating piece of research shows that differences in LLM system prompts, like asking for short answers, can greatly increase the chance of hallucination. There’s still so much to learn before we can use this new technology confidently.
Until next time,
— The IF Team