#15 Better consent, shared responsibility
When technology breaks consent, who carries the responsibility?
Where does responsibility lie for protecting people from online and AI-related harms? Recent events suggest there's no simple answer.
The outrage over X facilitating the widespread creation of abusive imagery shows what happens when a provider takes only limited responsibility for the risks its service creates. In the UK, this triggered an Ofcom investigation and the prospect of a ban, but whatever happens next won't undo the harm already done. The pressing question is how different parties can share responsibility to prevent similar harms.
Similar questions surround the growing pressure on governments to ban social media for under-16s. Concerns about harm are long-standing, yet past interventions have struggled to address them. It's unclear how far age-based bans and restrictions can go on their own, or whether they need to be complemented by new approaches to designing these services.
Another area of concern is people turning to chatbots in moments of mental health crisis. Following deaths linked to these conversations, lawsuits against providers are ongoing or being settled. These cases point to a broader failure to adequately protect vulnerable users, and to unresolved questions about where responsibility ultimately lies.
At IF, we believe designing services to be responsible, transparent, and participatory can help address these challenges. Giving people more control over consent needs to sit alongside appropriate responsibility for service providers. This approach isn't just safer for users; it can also become a genuine competitive advantage for the organisations that adopt it.
I recently introduced three new patterns for consentful data sharing in our design catalogue. One - Remind people of their choices - could play an important role in making social media safer and more consentful by design, especially for new users.
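To make that concrete, here is a minimal sketch of how a product team might implement the Remind people of their choices pattern. It's an illustration under assumptions, not IF's reference implementation: the TypeScript names, data shape, and reminder intervals are all hypothetical.

```typescript
// Hypothetical sketch of the "Remind people of their choices" pattern.
// All names and intervals here are illustrative assumptions.

interface ConsentRecord {
  setting: string;      // e.g. "location-sharing"
  granted: boolean;     // the user's current choice
  lastReviewed: Date;   // when the user last saw or changed this choice
}

const DAY_MS = 24 * 60 * 60 * 1000;
const NEW_USER_REMINDER_DAYS = 14; // assumption: remind new users sooner
const REMINDER_INTERVAL_DAYS = 90; // assumption: quarterly thereafter

// Decide whether to resurface a consent choice rather than leaving it
// buried in settings. Newer accounts get earlier reminders.
function shouldRemind(
  record: ConsentRecord,
  accountCreated: Date,
  now: Date = new Date(),
): boolean {
  const daysSince = (from: Date) => (now.getTime() - from.getTime()) / DAY_MS;

  const interval =
    daysSince(accountCreated) < 90 ? NEW_USER_REMINDER_DAYS : REMINDER_INTERVAL_DAYS;

  // Only re-confirm choices the user has actively granted; a setting
  // that is already off has nothing to remind them about.
  return record.granted && daysSince(record.lastReviewed) >= interval;
}

// Example: a month-old account with a location-sharing choice
// that hasn't been reviewed since sign-up.
const locationSharing: ConsentRecord = {
  setting: "location-sharing",
  granted: true,
  lastReviewed: new Date("2025-09-01"),
};
console.log(
  shouldRemind(locationSharing, new Date("2025-09-01"), new Date("2025-10-01")),
); // -> true: the new-user interval (14 days) has elapsed
```

The design choice worth noting is that the trigger is tied to how long since a person last reviewed a choice, not to how often the service wants re-engagement: the reminder exists for the user's benefit, not the provider's.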
While liability frameworks for AI are still developing, product and service teams have the tools to act now. Better consent design is one of them.

Everyone's responsible, but someone needs to act
For 10 years, IF has been helping large organisations build customer-facing services that scale safely, earn trust from the start and deliver long-term impact. We prototype, test, and launch AI products and services that people believe in and want to adopt, while helping organisations change the way they work in the AI age.
Learn more or talk to us.
What we’ve been reading
Location sharing with your besties has become a way to show closeness and create "ambient intimacy". But it raises questions about privacy and boundaries. Read the article.
An app that alerts a contact when users living alone fail to check in has gone viral in China. It shows how technology can help manage isolation - but at what cost? Read the article.
AI-altered images falsely claiming to identify an officer spread online after a federal agent shot a woman in Minnesota: an example of how AI can fuel misinformation in real time. Read the article.
I'd love to hear how you're using the new patterns for consentful experiences in your work - let us know in the comments.
Until next time,
— Rob and the IF Team