Daniel Hulme on Safe & Responsible AI…
"Creating safe and responsible AI is one of humanity's most important challenges."
Daniel Hulme
Reframing AI: Insights from Daniel Hulme at Brighton AI
At our recent Brighton AI event, we were joined by Daniel Hulme, CEO of Satalia and Chief AI Officer at WPP, for a wide-ranging and thought-provoking talk exploring the real value, risks, and future potential of AI. With 25+ years in the field - from academic research to enterprise-scale deployments - Daniel offered a critical reframe on how organisations should be thinking about artificial intelligence.
Moving Beyond Insight: Focus on Decisions
Daniel opened with a key challenge: most organisations don’t have insight problems; they have decision-making problems.
Simply surfacing better data isn’t enough to improve outcomes. Human decision-making, especially in complex contexts, is often flawed. His advice? Avoid giving decision tasks to humans when they involve more than seven variables - algorithms are far better suited to optimising decisions at that scale.
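A hypothetical illustration of why the variable count matters: with yes/no choices the option space doubles with every variable, quickly outgrowing what a person can weigh, while even a naive exhaustive search handles it instantly. The weights and scoring function below are invented purely for illustration.

```python
from itertools import product

# Invented scoring function: each of 10 yes/no options contributes a
# fixed (made-up) weight to the outcome when switched on.
WEIGHTS = [3, -2, 5, 1, -4, 2, 6, -1, 2, -3]

def score(choice):
    return sum(w for w, on in zip(WEIGHTS, choice) if on)

# Just 10 binary variables already produce 2**10 = 1024 combinations -
# far beyond the handful a human can reliably compare.
options = list(product([False, True], repeat=10))
best = max(options, key=score)

print(len(options))   # 1024 combinations
print(score(best))    # best achievable score: 19 (sum of positive weights)
```

With a few dozen variables the same space has more combinations than atoms in the observable universe, which is why real systems switch from brute force to dedicated optimisation solvers.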
Defining Real AI: Adaptive, Not Just Automated
Daniel made a clear distinction between automation and true AI:
Automation handles repeatable tasks and decisions; it’s efficient, but not intelligent.
True AI involves goal-directed, adaptive behaviour - systems that learn from outcomes and improve over time.
Most AI in production today, he argued, is actually just automation. Adaptive systems are far harder to build but represent the true paradigm shift.
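The distinction can be sketched with a toy two-option environment - all names and payoff numbers here are invented. The automated policy applies a fixed rule forever; the adaptive one updates its choice from observed outcomes.

```python
import random

random.seed(0)  # fixed seed so the toy run is reproducible

# Automation: a fixed rule that never changes, however the world responds.
def automated_policy(_history):
    return "A"

# Adaptation: pick whichever option has the best observed average outcome,
# after trying each option at least once.
def adaptive_policy(history):
    averages = {}
    for option, outcome in history:
        averages.setdefault(option, []).append(outcome)
    if len(averages) < 2:
        return "B" if "A" in averages else "A"
    return max(averages, key=lambda o: sum(averages[o]) / len(averages[o]))

# Toy environment (invented numbers): option "B" pays off far more often.
def outcome(option):
    return 1 if random.random() < (0.3 if option == "A" else 0.8) else 0

history = []
for _ in range(200):
    choice = adaptive_policy(history)
    history.append((choice, outcome(choice)))

picks = [c for c, _ in history]
print(picks.count("B") > picks.count("A"))  # adaptive policy converges on "B"
```

The automated policy would keep choosing "A" no matter what; the adaptive one discovers the better option from feedback - a minimal stand-in for the goal-directed, outcome-driven behaviour Daniel describes.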
A Practical Framework for AI Adoption
Daniel outlined six categories of AI application that can be mapped across any industry or supply chain:
Task Automation – Rules-based systems that replace repetitive human tasks.
Content Generation – Especially brand-specific, production-grade content using custom-trained models.
Human Representation – “Audience brains” that simulate how different groups perceive content.
Insight & Explanation – Machine learning used not just to predict outcomes, but to explain them.
Decision-Making at Scale – Algorithms that optimise complex allocation problems (e.g., logistics, workforce planning).
Human Augmentation – Digital twins trained on personal work data to improve team fit and productivity.
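The “decision-making at scale” category above can be sketched with a toy assignment problem - matching workers to tasks so that total cost is minimised. The cost matrix is invented, and a real logistics or workforce-planning deployment would use a dedicated solver rather than brute force.

```python
from itertools import permutations

# Invented cost matrix: cost[w][t] is the cost of giving task t to worker w.
cost = [
    [9, 2, 7],
    [6, 4, 3],
    [5, 8, 1],
]

# Brute-force search over every possible assignment - fine for 3 workers,
# but the number of assignments grows factorially, which is why large-scale
# allocation needs proper optimisation algorithms.
def best_assignment(cost):
    n = len(cost)
    return min(
        (sum(cost[w][t] for w, t in enumerate(perm)), perm)
        for perm in permutations(range(n))
    )

total, assignment = best_assignment(cost)
print(total, assignment)  # 9 (1, 0, 2): worker 0→task 1, 1→task 0, 2→task 2
```

Even this tiny instance has a non-obvious optimum, and at realistic scale (hundreds of workers, thousands of tasks) the search space is exactly the kind of many-variable decision problem Daniel argues should be handed to algorithms rather than people.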
AI Risk and Governance
Daniel encouraged a more nuanced view of AI ethics and safety. He categorised risk into three areas:
Micro risks: Safe deployment, including explainability and unintended consequences of goal-driven systems.
Malicious risks: Misuse by bad actors.
Macro risks: Broader societal impacts such as misinformation, surveillance, and economic disruption.
A key message: we must consider not only what happens when AI fails, but also what happens when it succeeds too well - optimising a narrow goal at the cost of wider harm.
Towards a Future of Abundance
Looking ahead, Daniel described multiple “singularities” we face in the next decade - from misinformation to automation, healthcare, law, and climate. He argued that AI, if applied thoughtfully, could unlock a world of abundance - where access to essentials like food, education, and energy is dramatically increased and made more equitable.
He also addressed growing concerns around job displacement. In the short term, he believes AI will augment rather than replace most roles. But over the longer term, society must prepare for significant transformation. Leadership, data access, and aligned values will be critical in navigating this change.
Final Reflections
Daniel closed with a reminder that technology alone is not the differentiator. Organisations that will thrive are those with:
Differentiated data
Skilled AI talent
Committed leadership that understands and embraces transformation
He urged companies not only to pursue profitability, but to define and act on a clear purpose. Talent, consumers, and partners will increasingly be drawn to organisations that align commercial success with societal value.