Dilemmas and Dangers in AI
How YOU can make a difference
Help tackle the biggest risks and opportunities posed by artificial intelligence
An interdisciplinary online fellowship helping you explore how to steer cutting-edge AI technology toward benefits for humanity.
Dates: 19 January to 28 February
Who: Smart, curious, and ambitiously altruistic students aged 15-19 who haven’t yet started university
Cost: Choice of tiers based on ability to pay (i.e. free if needed!)
Top participants receive custom mentorship, referrals to selective next-steps, access to funding, and ongoing support!
What you can expect
Facilitated discussion groups with a cohort of smart, curious students
Talks and Q&As with professionals addressing existential risks from AI
Mentorship and referrals to pursue follow-up projects
Support with UCAS applications and Oxbridge interviews
97% of participants would recommend!
Surveyed at the end of our summer cohort, 97% of our 300+ Finalists rated their likelihood to recommend this course at 7/10 or above; 35% said 10/10!
Plus, nearly 90% reported feeling more confident in their ability to make a positive impact after their course!
Course overview
Week 1
- Why AI might be the next industrial revolution
- The path to ChatGPT, and where it leads now
- Existing and emerging risks of today’s AI landscape
Example resources:
🎧 Blog readout by Holden Karnofsky: “This Can't Go On”
🌐 Interactive website by Epoch AI: AI Trends
🎙️ Video interview by the Institute of Art and Ideas: “Immortality in the future”
Week 2
- Foundations of AI agents
- Could AI cause extinction this century? Where the experts stand
- The real Black Mirror: AI-enhanced dystopia
Example resources:
🎦 YouTube video by Rational Animations: “Specification Gaming: How AI Can Turn Your Wishes Against You”
📺 Documentary on Netflix: Coded Bias
📚 Book by Brian Christian: The Alignment Problem
Week 3
- How and why AI lies to us
- Bias and ethics in current AI systems
- How we accidentally train AI to overpower us
Example resources:
📝 Writeup by the Center for AI Safety: “An Overview of Catastrophic AI Risks”
🎥 YouTube video by 80,000 Hours: “Could AI wipe out humanity?”
💬 Talk by Max Daniel: “S-risks: why they are the worst existential risks, and how to prevent them”
Week 4
- Who wields AI power and how?
- Promises of and challenges to alignment interventions
- Governance of AI and analogies from history
Example resources:
📝 Blog post by BlueDot Impact: “A Brief Introduction to some Approaches to AI Alignment”
🔍 Cause profile by 80,000 Hours: “We can tackle these risks” in “Preventing an AI-related catastrophe”
💬 Talk by Michael Aird: “AI Governance: overview and careers”
Week 5
- Is this a problem you should prioritize?
- How to help tackle AI risks without even learning to code
- Degrees that set you up to tackle AI risks
Example resources:
⏸️ Blog post by Scott Alexander: “Pause For Thought: The AI Pause Debate”
📝 Article by Probably Good: “The SELF Framework” — a simple tool to help you assess a role’s potential for good
🌐 Tag on the EA Opportunities Board: AI Safety & Policy
Weekly structure
- Work through weekly interactive videos, questions, articles, and other activities to prepare for the week’s exploration sheet and discussion call.
- Weekly discussion with a Leaf facilitator and small-group breakouts to develop your critical thinking in conversation with intelligent, interesting, like-minded teens. We’ll do our best to find a slot that works around your other commitments!
- From predicting trends in AI systems and policies, to jailbreaking AI systems yourself (can you get it to reveal its secret password?), to getting started on coding, you’ll explore practical skills to help you tackle the dangers of AI, and you’ll form your own opinions by reflecting on the week’s content before your discussion call.
- Meet professionals working at the forefront of tackling existential risks from artificial intelligence.
- Share your own knowledge and join sessions run by Fellows and alumni! Meet inspiring peers with shared interests, collaborate on projects, and discover exciting new ideas.
- Discord channel, paired 1:1s, and opportunities to get to know peers with different backgrounds but shared passions.
- Weekly competitions for prizes! E.g. technical tasks, creative writing, or more extensive activities like Intelligence Rising, an immersive simulation developed by researchers at Cambridge University’s Centre for the Study of Existential Risk. (Participation is not guaranteed: you may need to sign up or apply internally, and it depends on your time availability.)
2025 speakers included:
Isaac Dunn, an Open Philanthropy Century Fellow studying global catastrophic risk and AI alignment
Emma Lawsen, Senior Policy and Operations Strategist at the Centre for Long-Term Resilience
Buck Shlegeris, CEO of AI safety and security research organization Redwood Research
Meet some staff & speakers
Leaf is supported by a wide range of experts, facilitators, and alumni who help our Fellows grow.
Connor Axiotes
Conjecture, ex-Adam Smith Institute & UK Parliament
Connor is an AI policy and communications expert. Most recently, he worked as Strategic Communications Lead at Conjecture, a startup building a new AI architecture to ensure the controllable, safe development of advanced AI technology. He was previously Director of Communications & Research Lead for Resilience, Risk and State Capacity at the Adam Smith Institute (a UK think tank). He has worked in communications for Members of Parliament and on Rishi Sunak’s Prime Ministerial campaign. He has a master’s in Global Politics from Durham University.
Noah Siegel
Google DeepMind
Noah is a Research Engineer at Google DeepMind. After his research at the Allen Institute for AI in Seattle, he worked on machine learning for robotics at DeepMind before switching to focus on language model reasoning and explanations as part of AI Safety and alignment research. Having studied computer science, economics, and maths as an undergraduate, he is now pursuing a PhD in Artificial Intelligence at University College London via the UCL-DeepMind joint PhD program.
Jai Patel
University of Cambridge
Jai researches AI governance at Cambridge University's Centre for the Future of Intelligence, working on policies to help AI scale responsibly, and is soon starting full-time at the UK government’s AI Safety Institute. He also works part-time at EdTech startup Wellio, and previously co-developed CitizenAI, a GPT wrapper which provides quick, independent, expert advice on common issues such as employment or housing concerns. His undergrad was in PPE at LSE.
Yi-Ling Liu
Writer, editor & journalist
Yi-Ling is writing a book for Penguin Press on the Chinese Internet, was previously the China editor of Rest of World, and has written freelance for outlets including the New Yorker, WIRED, and the New York Times Magazine on Chinese tech, society, and politics. She is an affiliate at Concordia AI, which promotes international cooperation on AI safety and governance. Her undergrad was in English and creative writing at Yale University.
Jack Parker
HiddenLayer
Jack is a computer security engineer whose speciality is offensive security. His education at Middlebury College and Duke University focused on math, machine learning, software engineering, and education. He is especially interested in solving technical safety and security problems to improve the reliability and trustworthiness of AI systems.
Sam Smith
Course designer and facilitator
Sam is the course designer and facilitator of DDAI. They are a Leaf alum and Non-Trivial winner who has facilitated several courses for both organisations. They are studying maths and philosophy at the University of Bristol and have particular interests in applied mathematics and ethics.
The opportunities don’t end after five weeks!
Learn more here about our referrals to top research programs, connection to experts and mentorship, support and funding for projects, and other ongoing Leaf alumni opportunities!
Alumni have gone on to work with:
Cambridge AI Safety Hub
Center for Youth and AI
Alumni Perspectives