Episode #38 “France vs. AGI” For Humanity: An AI Risk Podcast



In Episode #38, host John Sherman talks with Maxime Fournes, Founder, Pause AI France. With the third AI “Safety” Summit coming up in Paris in February 2025, we examine France’s role in AI safety, revealing France to be among the very worst when it comes to taking AI risk seriously. How deep is madman Yann LeCun’s influence in French society and government? And would France even join an international treaty? The conversation covers the potential for international treaties on AI safety, the psychological factors influencing public perception, and the power dynamics shaping AI’s future.

Please Donate Here To Help Promote For Humanity
https://www.paypal.com/paypalme/forhumanitypodcast

EMAIL JOHN: [email protected]

This podcast is not journalism. But it’s not opinion either. This is a long-form public service announcement. This show simply strings together the existing facts and underscores the unthinkable but probable outcome: the end of all life on earth.

For Humanity: An AI Safety Podcast is the accessible AI safety podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI.

Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit their work could kill all humans, possibly within as little as two years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.

Max Winga’s “A Stark Warning About Extinction”
https://youtu.be/kDcPW5WtD58?si=i6IRy82xZ2PUOp22

For Humanity Theme Music by Josef Ebner
Youtube: https://www.youtube.com/channel/UCveruX8E-Il5A9VMC-N4vlg
Website: https://josef.pictures

RESOURCES:

SUBSCRIBE TO LIRON SHAPIRA’S DOOM DEBATES on YOUTUBE!!
https://www.youtube.com/@DoomDebates

BUY STEPHEN HANSON’S BEAUTIFUL AI RISK BOOK!!!
https://stephenhansonart.bigcartel.com/product/the-entity-i-couldn-t-fathom

JOIN THE FIGHT, help Pause AI!!!!

Join the Pause AI Weekly Discord Thursdays at 2pm EST
https://discord.com/invite/pVMWjddaW7

22 Word Statement from Center for AI Safety
https://www.safe.ai/work/statement-on-ai-risk

Best Account on Twitter: AI Notkilleveryoneism Memes
https://twitter.com/AISafetyMemes

TIMESTAMPS:
**Concerns about AI Risks in France (00:00:00)**
**Optimism in AI Solutions (00:01:15)**
**Introduction to the Episode (00:01:51)**
**Max Winga’s Powerful Clip (00:02:29)**
**AI Safety Summit Context (00:04:20)**
**Personal Journey into AI Safety (00:07:02)**
**Deep Learning Revolution (00:08:24)**
**Concerns about AI Progress (00:10:12)**
**Autonomous Agents as a Concern (00:13:07)**
**Misuse of AI Technology (00:18:21)**
**Commitment to AI Risk Work (00:21:33)**
**Impact of Efforts (00:21:54)**
**Existential Risks and Choices (00:22:12)**
**Underestimating Impact (00:22:25)**
**Weak Counterarguments (00:23:14)**
**Existential Dread Theory (00:23:56)**
**Global Awareness of AI Risks (00:24:16)**
**France’s AI Leadership Role (00:25:09)**
**AI Policy in France (00:26:17)**
**Influential Figures in AI (00:27:16)**
**EU Regulation Sabotage (00:28:18)**
**Committee’s Risk Perception (00:30:24)**
**Concerns about France’s AI Development (00:32:03)**
**International AI Treaties (00:32:36)**
**Sabotaging AI Safety Summit (00:33:26)**
**Quality of France’s AI Report (00:34:19)**
**Position of AI Leaders (00:42:38)**
**The Role of Onboarding in Activism (00:43:46)**
**Challenges of Volunteer Management (00:46:54)**
**Encouraging Active Participation (00:48:03)**
**Addressing Public Perception of AI Risks (00:49:14)**
**Optimism Amidst Despair (00:49:14)**
**Mobilizing Public Awareness (00:51:16)**
**The Power of Individual Action (00:52:06)**
**The Domino Effect of Recruitment (00:52:38)**
**Social Norms and AI Risk Discussions (00:53:17)**
**Rage and Public Reaction to AI Risks (00:55:07)**
**Personal Reflections on Future Generations (00:56:15)**
**The Responsibility of Current Generations (00:57:22)**
**Debating AI’s Potential Benefits (00:58:00)**
**Scenarios Following AGI Development (00:59:18)**
**Concerns Over Rapid Technological Advancement (01:01:03)**
**Speculative Nature of AI Risks (01:05:33)**
**Compartmentalization of the Internet (01:06:14)**
**Vision of Destruction (01:06:50)**
**AI Technology and Human Control (01:07:10)**
**Worries about Future AI Models (01:08:00)**
**Scaling Laws in AI Development (01:10:07)**
**Skepticism Towards Predictions (01:12:18)**
**Hope for Change in AI Management (01:13:24)**
**Call to Action (01:14:04)**
**Existential Reflection (01:15:06)**
**Positive Mindset for the Future (01:16:10)**
**Celebration of Life (01:16:40)**
**Final Thoughts on AI Responsibility (01:19:34)**


7 thoughts on “Episode #38 “France vs. AGI” For Humanity: An AI Risk Podcast”

  1. If less than 1% of people even know there’s a problem how is this going to be the biggest movement in human history? It’s not like we have a lot of time for everyone to find out at their convenience.

  2. Thanks for the shout-out John! That was a fun surprise 🙂

    For anyone interested, I'll be putting out more content in the future discussing the nuances of the AI safety issue in a straightforward and accessible way. I'll be a bit slow in the next month or so as I'm in the process of moving to begin work as a research engineer at Conjecture AI trying to build safe and controllable systems as alternatives to the giant black box models and agents that the frontier labs are recklessly building!

