Dilemmas and Dangers in AI
How YOU can make a difference
Help tackle the biggest risks from artificial intelligence
Free 5-week interdisciplinary online fellowship helping you master the skills needed to steer cutting-edge AI technology toward benefits for humanity.
Dates: Next cohorts expected late June to early August 2025
Who: Smart, curious, and ambitiously altruistic students aged 15-19 in the UK or Ireland, studying any subjects, who haven’t yet started university
Applications: You can apply early or sign up for our mailing list for updates, at the bottom of our home page.
What you can expect
Oxbridge-style learning with tutorials and a cohort of smart, curious students
Talks and Q&As with professionals addressing existential risks from AI
Mentorship and up to £1,000 in grant funding for alumni to pursue follow-up projects
Support with UCAS applications and Oxbridge interviews
9/10 Fellows would recommend
Surveyed at the end of our first cohort, Fellows gave an average of 9.0 when asked to rate from 1 to 10 how likely they would be to recommend the Fellowship to a friend interested in having a big impact.
Plus, 83% of Fellows feel both more confident in their ability to make a positive impact and more ready to take ambitious actions as a result of participating.
Fellowship overview
Week 1
- Why AI might be the next industrial revolution
- The path to ChatGPT, and where it leads now
- Transhumanism & music we lack the ears to hear
Example resources:
🎧 Blog readout by Holden Karnofsky: “This Can't Go On”
🌐 Interactive website by Epoch AI: AI Trends
🎙️ Video interview by the Institute of Art and Ideas: “Immortality in the future”
Week 2
- How and why AI lies to us
- Bias and ethics in current AI systems
- How we accidentally train AI to overpower us
Example resources:
🎦 YouTube video by Rational Animations: “Specification Gaming: How AI Can Turn Your Wishes Against You”
📺 Documentary on Netflix: Coded Bias
📚 Book by Brian Christian: The Alignment Problem
Week 3
- Can’t we just unplug dangerous AI?
- Could AI cause extinction this century? Where the experts stand
- The real Black Mirror: AI-enhanced dystopia
Example resources:
📝 Writeup by the Center for AI Safety: “An Overview of Catastrophic AI Risks”
🎥 YouTube video by 80,000 Hours: “Could AI wipe out humanity?”
💬 Talk by Max Daniel: “S-risks: why they are the worst existential risks, and how to prevent them”
Week 4
- Diving into the black box: Interpreting AI
- Technical solutions: empirical and theoretical approaches
- Competitions that boost your AI skills
Example resources:
📝 Blog post by BlueDot Impact: “A Brief Introduction to some Approaches to AI Alignment”
🔍 Cause profile by 80,000 Hours: “We can tackle these risks” in “Preventing an AI-related catastrophe”
📝 Article by Probably Good: “The SELF Framework” — a simple tool to help you assess a role’s potential for good
Week 5
- Filling the void: Introducing (inter)national AI rules and regulations
- How to help tackle AI risks without even learning to code
- Degrees that set you up to tackle AI risks
Example resources:
💬 Talk by Michael Aird: “AI Governance: overview and careers”
⏸️ Blog post by Scott Alexander: “Pause For Thought: The AI Pause Debate”
🌐 Tag on the EA Opportunities Board: AI Safety & Policy
Weekly structure
- Neatly summarised introduction to the key risks and impact opportunities of artificial intelligence, plus curated interactive videos, engaging quizzes, and other activities.
- Weekly check-in and discussion with an advisor and 3-5 peers; develop your critical thinking in conversation with intelligent, interesting, like-minded teens. We’ll do our best to find a slot that works around your other commitments!
- From predicting trends in AI systems & policies to stress-testing AI systems yourself (can you get it to reveal its secret password?) to getting started on coding, you’ll explore practical skills to help you tackle the dangers of AI.
- Meet professionals working at the forefront of tackling existential risks from artificial intelligence.
- Share your own knowledge, plus join sessions run by Fellows and alumni! Meet inspiring peers with shared interests; collaborate on projects; discover exciting new ideas.
- Discord channel, paired 1:1s, and opportunities to get to know peers with different backgrounds but shared passions.
- Take on the role of a politician or a CEO as you navigate and shape AI developments. Compete and collaborate with other players in Intelligence Rising, an immersive simulation developed by researchers at Cambridge University’s Centre for the Study of Existential Risk. (Participation is not guaranteed — you may need to sign up or apply internally, and it depends on your time availability.)
Meet the speakers & staff
Leaf draws on a wide range of experts, facilitators, and alumni to support the growth of our Fellows.
Connor Axiotes
Conjecture, ex-Adam Smith Institute & UK Parliament
Connor is an AI policy and communications expert. Most recently, he worked as Strategic Communications Lead at Conjecture, a startup building a new AI architecture to ensure the controllable, safe development of advanced AI technology. He was previously Director of Communications & Research Lead for Resilience, Risk and State Capacity at the Adam Smith Institute (a UK think tank). He has worked in communications for Members of Parliament and Rishi Sunak’s Prime Ministerial campaign. He has a master’s in Global Politics from Durham University.
Noah Siegel
Google DeepMind
Noah is a Research Engineer at Google DeepMind. After his research at the Allen Institute for AI in Seattle, he worked on machine learning for robotics at DeepMind before switching to focus on language model reasoning and explanations as part of AI Safety and alignment research. Having studied computer science, economics, and maths as an undergraduate, he is now pursuing a PhD in Artificial Intelligence at University College London via the UCL-DeepMind joint PhD program.
Jai Patel
University of Cambridge
Jai researches AI governance at Cambridge University's Centre for the Future of Intelligence, working on policies to help AI scale responsibly, and is soon starting full-time at the UK government’s AI Safety Institute. He also works part-time at EdTech startup Wellio, and previously co-developed CitizenAI, a GPT wrapper that provides quick, independent, expert advice on common issues such as employment or housing concerns. His undergrad was in PPE at LSE.
Yi-Ling Liu
Writer, editor & journalist
Yi-Ling is writing a book for Penguin Press on the Chinese Internet. She was previously the China editor of Rest of World, and has written freelance on Chinese tech, society & politics for outlets including the New Yorker, WIRED, and the New York Times Magazine. She is an affiliate at Concordia AI, which promotes international cooperation on AI safety and governance. Her undergrad was in English and creative writing at Yale University.
Jack Parker
HiddenLayer
Jack is a computer security engineer whose speciality is offensive security. His education at Middlebury College and Duke University focused on math, machine learning, software engineering, and education. He is especially interested in solving technical safety and security problems to improve the reliability and trustworthiness of AI systems.
Jamie Harris
Leaf
After graduating with a first-class degree in history from the University of Oxford, Jamie taught history for several years at a Sixth Form college. He then joined the think tank Sentience Institute where he researched the moral consideration of AI entities, the history of AI rights research, and which psychological factors influence moral concern for AIs. He has co-founded and led multiple nonprofits providing impact-focused advice.
The opportunities don’t end after 5 weeks
Alumni from our recent Fellowships are being supported to pursue projects like:
Research projects into topics that fascinated them during the course, focused on various risks from AI and opportunities to tackle them
A more in-depth investigation of the risks from AI through a Leaf-specific cohort of BlueDot Impact’s AI Safety Fundamentals courses
A podcast investigating effective altruism — what it is, how it can be applied, and what its limitations are
Plus taking follow-up opportunities* offered by Leaf like:
Weekly accountability calls to create a long-term career plan
Virtual work experience with a high-impact nonprofit
Up to £1,000 in grant funding plus mentorship for projects
*These follow-ups are applied for or earned, not guaranteed!
Plus, you’ll still have access to the online learning platform, the Discord channel, and your new friends after the 5 weeks end.
Our partner for follow-up opportunities:
Cambridge AI Safety Hub
How our first cohort helped
Where Leaf alumni are now
Oxford University
Cambridge University
Harvard University
London School of Economics
Secure a place in 15 minutes
The first stage of your application is a short written form (15 minutes). We mostly ask for basic info about you to check eligibility. When you complete this, you’ll immediately be sent a personalised link to our fun but challenging puzzle quiz with rapid-fire brain-teaser questions (20 minutes). Both parts are due by 7th July.
We’ll get back to you within two weeks (21st July or before), either inviting you to participate as an Independent Learner with access to our platform for self-paced exploration, or proceeding with your application for the full fellowship.
If you are invited to the final stage (30 minutes), you’ll answer a small number of thought-provoking, open-ended questions relevant to the topics of the programme, plus some multiple choice personality quiz questions. (There’s no interview!)
More questions? See our “FAQ” page.