
AI Policy Strategy Fellowship

Returning for its second year | Starting the week of May 25, 2026 

AI safety is not only a technical challenge. It is a political one. This fellowship equips committed professionals with the skills, frameworks, and community needed to increase their impact through policy changes that meaningfully protect society and support long-term beneficial AI development.

Curriculum Overview | Program Presentation

Are any of these challenges familiar to you?

Echo chamber

Your research is brilliant, but your only audience is a handful of peers who already agree with you.

Political black box

You want to advocate for your findings, but you do not know how to access or navigate the policymaking community.

Neutrality trap

You avoid making a public policy recommendation for fear of being seen as an activist rather than an objective expert.

Our fellowship is designed to support you with

Guidance from policy experts

The fellowship is led by Claire Boine, Founder of Successif and Co-Chair in Technology Law and Digital Policy at the European University Institute's School of Transnational Governance. Claire brings deep experience spanning academia, AI governance, and public policy, with a strong track record of engaging policymakers, institutions, and the public on the governance of advanced technologies. Our curriculum draws on her years of expertise and was developed in collaboration with other experts in the field.

Tools to create and execute strategies that impact policy

Through our program, you will gain:

  • the ability to look at an AI safety problem and map out a concrete path from research insight to policy action, whether through legislation, regulatory guidance, executive action, or international coordination

  • confidence in choosing when to work within institutions and when to build public pressure, and the skills to do both without compromising your credibility as an expert

  • a shared strategic playbook developed with your cohort, so your policy efforts reinforce each other instead of duplicating or contradicting one another

  • a tested theory of change for your own work, refined through feedback from your peers and the Successif fellowship team, who will challenge your assumptions to strengthen impact

What you will accomplish


Create a coordinated strategy with your peers

You won't be doing this work alone. You'll join a cohort of professionals who are serious about shaping safer AI. This network will challenge you, support you, and help you expand your impact far beyond the fellowship. Together, you will form a coalition to strategically influence AI policy.


Write an op-ed

This fellowship gives you a step-by-step process for turning your expertise into a clear, compelling, ready-to-publish article that policymakers and the public can understand. By the end, you will have a polished op-ed that communicates your ideas with confidence and authority. Judges will review fellows' op-eds, and the top three authors will be awarded a monetary prize.


Complete a capstone project

You bring ambition and domain expertise. We bring the structure, coaching, and real-world guidance needed to turn your idea into a credible, high-impact project for safer AI. You'll receive personalised feedback, expert support, and targeted mentorship so your project is ready to advance your organisation's goals or your own career trajectory. The top three project winners will be awarded a monetary prize, as determined by our judge panel.

What our fellows say

"It helped me refine how I think about AI risk, policy, and advocacy, and especially how to communicate those ideas. The program was great preparation for moving into full-time AI policy work. I’m also still in touch with several fellows and mentors from Successif who are doing thoughtful, serious work across the field."

Research Scholar at a leading university

Program Overview

This free program is designed for professionals at the intersection of AI safety, policy, and public discourse, including technical or governance researchers, AI safety organisation leaders, and policy analysts.

The fellowship is structured into two distinct stages designed to translate expertise into strategic influence:

  • Phase 1: Intensive Content & Workshops (10 weeks). A deep dive into collaborative exercises and high-level workshops, building the foundational strategy and network needed for policy impact.

  • Phase 2: Independent Capstone Project (7 weeks). Apply your expertise to a self-directed project, with continued access to the Successif team through 1-on-1 meetings and weekly office hours.

Designed for active professionals, the program requires approximately 4 hours per week across both phases.

Weekly Schedule


Ready to take the next step to create a meaningful impact in AI policy?


Contact us

Successif is a project of Players Philanthropy Fund, a Maryland charitable trust recognized by the IRS as a tax-exempt public charity under Section 501(c)(3) of the Internal Revenue Code (Federal Tax ID: 27-6601178, ppf.org/pp). Contributions to Successif qualify as tax-deductible to the fullest extent of the law.
