Dilemmas and dangers in AI

How YOU can make a difference

Help tackle the biggest risks from artificial intelligence

Free 5-week interdisciplinary online fellowship helping you master the skills needed to steer cutting-edge AI technology toward benefits for humanity.

  • Dates: 24th February to 31st March, 2024 (5 weeks, finishing by Easter)

  • Who: Smart, curious, and ambitiously altruistic UK students of any subject in years 11-13

  • Applications: Apply by Sunday 11th February, 2024

What you can expect

Oxbridge-style learning with tutorials and a cohort of smart, curious students

Talks and Q&As with professionals addressing existential risks from AI

Mentorship and up to £1,000 in grant funding for alumni to pursue follow-up projects

Support with UCAS applications and Oxbridge interviews

8.7/10 Fellow overall satisfaction

Surveyed at the end of our most recent, similar course, Fellows gave an average of 8.7 when asked to rate from 1 to 10 their overall satisfaction with the Fellowship.

Plus, 84% of Fellows feel both more confident in their ability to make a positive impact and more ready to take ambitious actions as a result of participating.

Fellowship overview

    • Why AI might be the next industrial revolution

    • What sci-fi gets right and wrong about AI risk

    • Transhumanism & music we lack the ears to hear

    Example resources:

    🎧 Blog readout by Holden Karnofsky: “This Can't Go On”

    🌐 Interactive website by Epoch AI: AI Trends

    🎙️ Video interview by the Institute of Art and Ideas: “Immortality in the future”

    • How and why AI lies to us

    • Bias and ethics in current AI systems

    • How we accidentally train AI to overpower us

    Example resources:

    🎦 YouTube video by Rational Animations: “Specification Gaming: How AI Can Turn Your Wishes Against You”

    📺 Documentary on Netflix: Coded Bias

    📚 Book by Brian Christian: The Alignment Problem

    • Can’t we just unplug dangerous AI?

    • Could AI cause extinction this century? Where the experts stand

    • The real Black Mirror: AI-enhanced dystopia

    Example resources:

    📝 Writeup by the Center for AI Safety: “An Overview of Catastrophic AI Risks”

    🎥 YouTube video by 80,000 Hours: “Could AI wipe out humanity?”

    💬 Talk by Max Daniel: “S-risks: why they are the worst existential risks, and how to prevent them”

    • Diving into the black box: Interpreting AI

    • Technical solutions: empirical and theoretical approaches

    • Filling the void: Introducing (inter)national AI rules and regulations

    Example resources:

    📝 Blog post by BlueDot Impact: “A Brief Introduction to some Approaches to AI Alignment”

    💬 Talk by Michael Aird: “AI Governance: overview and careers”

    🔍 Cause profile by 80,000 Hours: “We can tackle these risks” in “Preventing an AI-related catastrophe”

    • How to help tackle AI risks without even learning to code

    • Degrees that set you up to tackle AI risks

    • Competitions that boost your AI skills

    Example resources:

    📑 Directory by Leaf: “Different potentially impactful career pathways, organised by degree subjects”

    🌐 Tag on the EA Opportunities Board: AI Safety & Policy

    📝 Article by Probably Good: “The SELF Framework” — a simple tool to help you assess a role’s potential for good

Weekly structure

  • Neatly summarised introduction to the key risks and impact opportunities of artificial intelligence, organised on our online learning platform with interactive videos, engaging quizzes, and other activities.

  • Weekly check-in and discussion with an advisor and 3-5 peers; develop your critical thinking in conversation with intelligent, interesting, like-minded teens.

    We’ll do our best to find a slot that works around your other commitments!

  • From predicting trends in AI systems & policies to stress-testing frontier AI yourself (can you bypass ChatGPT’s safety features?) to getting started on coding, you’ll explore practical skills to help you tackle the dangers of AI.

  • Debate and get rapid feedback on your ideas with an AI advisor trained on the content from this programme.

  • Meet professionals working at the forefront of tackling existential risks from artificial intelligence.

  • Share your own knowledge plus join sessions run by Fellows and alumni! Meet inspiring peers with shared interests; collaborate on projects; discover exciting new ideas.

  • Discord channel, paired 1:1s, and opportunities to get to know peers with different backgrounds but shared passions.

Meet the speakers & staff

Leaf is supported by a wide range of experts, facilitators, and alumni who help nurture the growth of our Fellows.

Connor Axiotes

Conjecture, ex-Adam Smith Institute & UK Parliament

  • Connor is Strategic Communications Lead at Conjecture, a startup building a new AI architecture to ensure the controllable, safe development of advanced AI technology. He was previously Director of Communications & Research Lead for Resilience, Risk and State Capacity at the Adam Smith Institute (a UK think tank). He has worked in communications for Members of Parliament and Rishi Sunak’s Prime Ministerial campaign. He has a master’s in Global Politics from Durham University.

Noah Siegel

Google DeepMind

  • Noah is a Research Engineer at Google DeepMind. After his research at the Allen Institute for AI in Seattle, he worked on machine learning for robotics at DeepMind before switching to focus on language model reasoning and explanations as part of AI Safety and alignment research. Having studied computer science, economics, and maths as an undergraduate, he is now pursuing a PhD in Artificial Intelligence at University College London via the UCL-DeepMind joint PhD program.

Lara Thurnherr

Rhyme (AI governance research consultancy)

  • Lara studied History and Public Law at the University of Bern. She is the founder of Rhyme, a research consultancy focused on answering historical questions relevant to today’s challenges of governing AI. She is also a Tech Diplomacy Affiliate at the Simon Institute for Longterm Governance, and recently worked on research for the grant-making foundation Open Philanthropy.

Vedaangh Rungta

University of Cambridge

  • Vedaangh is a first-year Computer Science student at the University of Cambridge. He has over half a decade of programming experience, with current research interests in deep learning and artificial intelligence. As well as being an alumnus of Leaf's first programme in 2021, he participated in the pilot Non-Trivial Fellowship, where he conducted research on AI Safety, produced a paper, and received a grant to pursue the research further. He also founded CodeSec, a community-interest initiative encouraging school students to experiment with industry-standard software.

Jamie Harris

Leaf

  • After graduating with a first-class degree in history from the University of Oxford, Jamie taught history for several years at a Sixth Form college. He then joined the think tank Sentience Institute where he researched the moral consideration of AI entities, the history of AI rights research, and which psychological factors influence moral concern for AIs. He has co-founded and led multiple nonprofits providing impact-focused advice.

The opportunities don’t end after 5 weeks

Alumni from our recent Fellowships are being supported to pursue projects like:

  • Research projects into course topics that fascinated them, focused on various risks from AI and opportunities to tackle them

  • A more in-depth investigation of the risks from AI through a Leaf-specific cohort of BlueDot Impact’s AI Safety Fundamentals courses

  • A podcast investigating effective altruism — what it is, how it can be applied, and what its limitations are


Plus taking follow-up opportunities* offered by Leaf like:

  • Weekly accountability calls to create a long-term career plan

  • Virtual work experience with a high-impact nonprofit

  • Up to £1,000 in grant funding plus mentorship for projects

*These follow-ups are applied for or earned, not guaranteed!

Plus, you’ll still have access to the online learning platform, the Discord channel, and your new friends after the 5 weeks end.

How our previous programmes helped

Where Leaf alumni are now

Oxford University

Cambridge University

Harvard University

London School of Economics

Secure a place in 15 minutes

The first stage of your application is a short written form (~15 minutes), which mostly asks for basic information about you so we can check eligibility. Based on this, we will either invite you to participate as an Independent Learner, with access to our platform for self-paced exploration, or proceed with your application for the full Fellowship.

If you are invited to the final stage (~45 minutes), you’ll answer a small number of thought-provoking, open-ended questions relevant to the topics of the programme, plus a series of rapid-fire, brain teaser questions. (There’s no interview!)

More questions? See our “FAQ” page.