Reclaiming Human Agency in
the AI Race

A C-suite forum focussed on real-world agency and shared stewardship — not a technology briefing — led by Dr. Julia Stamm (She Shapes AI) and Women of AI.

In partnership with:

Led by:

  • Dr. Julia Stamm

Founder, She Shapes AI & The Futures Project | Fellow, RSA & Fellow, Hertie School Centre for Digital Governance | Former European Commission & G20 Advisor

Time, date & format:

  • Thursday, 18th June

  • 2:00 pm - 3:00 pm GMT

  • Online, interactive networking event on Zoom

Digital agenda:

  • 2:00 pm Intro & Welcome

  • 2:05 pm Julia's Keynote

  • 2:30 pm Breakout rooms

  • 2:45 pm Q&A

  • 2:55 pm Close

Leading Organisations Out of
the Certainty Trap

We’re racing towards an AI future, but without direction and, increasingly, without trust. AI investment is exploding, new models drop constantly, and tech companies are pushing ahead with confidence. Yet half the world doesn't trust the technology being built on its behalf.

Dr. Julia Stamm breaks down why certainty is the problem, not the solution, and offers a concrete framework for reclaiming our agency to shape technology, and a future, that actually serves humanity.

This executive forum brings together senior leaders to step back from the race, ask tough questions, and explore what genuinely purposeful AI leadership looks like.

Grounded in Julia's TRAP framework (Trust, Relevance, Attention, Potential), this session moves beyond the hype to focus on the leadership skills that AI adoption actually demands: human judgment, collaborative decision-making, and the courage to admit uncertainty.

Who is this for?

◆ CEOs, CFOs, CIOs, COOs, and board-level directors

◆ Senior leaders in technology, financial services, professional services, and the public sector

◆ Leaders responsible for AI strategy, governance, and organisational transformation

Laura Tatton, AKA The AI Lady, PR expert and Director at ConsuLT PR & Marketing

Julia will address the following topics:

The Certainty Trap: Why performing confidence in the face of the unknowable is one of the biggest risks facing organisations today — and what genuinely trustworthy AI leadership looks like instead.

The TRAP Framework: A practical lens for navigating AI uncertainty — Trust, Relevance, Attention, Potential — and how leaders can use it to cut through noise and make decisions grounded in human values.

Who Gets to Decide? A very small group of people is currently writing the future for everyone else. This session examines who's in the room, who's missing, and what genuine shared stewardship looks like in practice.

Risk Awareness as Leadership: Women consistently show more hesitation around AI — not risk aversion, but risk awareness. This session reframes that as exactly the leadership quality organisations need right now.


A Global, Credible Voice on AI

Dr. Julia Stamm is one of the world’s most credible voices on responsible AI. She has spent nearly 20 years working at the intersection of innovation, technology, research, business, policymaking, and civil society to ensure innovation and technology meet societal needs.

From shaping EU research frameworks at the European Commission to building the G20 Global Solutions Initiative and growing data-for-good organisations, Julia brings rare institutional authority and practitioner grit to every stage she graces.

As Founder & CEO of She Shapes AI, Julia spearheads the leading catalyst for responsible AI innovation by women globally. Its flagship awards programme draws applications from every continent and demonstrates what AI done well looks like in practice.

A TEDx speaker, RSA Fellow and Digital Female Leader Award winner, Julia inspires audiences to think differently about AI, and she gives them the frameworks to act.

RSVP TO SECURE A PLACE

This event is complimentary to attend and is limited to 20 spaces. Register now to secure your place.


What you'll gain:

◆ A clear framework for cutting through AI hype — the TRAP model gives senior leaders a structured lens for evaluating AI decisions grounded in trust and human values, rather than vendor narratives.

◆ Insight into the difference between AI certainty and AI leadership — and why organisations that reward the former are setting themselves up for failure.

◆ A reframed understanding of risk awareness as a competitive advantage, and practical strategies for bringing that perspective to the centre of your AI governance.

◆ Actionable thinking on shared stewardship — what it means to take collective responsibility for AI outcomes, and how to begin embedding that culture inside your organisation.

◆ Opportunities for peer-level dialogue with senior leaders across industries facing the same challenges — in a setting small enough for candid, unfiltered conversation.

◆ A personal and organisational framework — Julia's PART model (Purpose, Agency, Responsibility, Trust) — to take back into the room and put to work immediately.