The recent and rapid developments in the field of artificial intelligence (“AI”) make it urgent to mitigate AI catastrophic risks, and a shift in the Overton window on various interventions to reduce those risks offers unique momentum to do so. This endeavor requires, among other things, that more professionals who are aware of those risks, and who are committed to helping mitigate them, choose or change their career path accordingly. In turn, this requires identifying the most impactful roles for mitigating AI catastrophic risks, so that new contributors can maximize the impact they have through their careers. The set of people focused on this work is small, particularly compared to the number focused on advancing AI capabilities (one report from early 2023 estimates the former at roughly 400 individuals). However, this number is growing quickly, and interventions to reduce risks are becoming clearer. These developments point to the need for a diverse set of interdisciplinary contributors. This report summarizes some of Successif’s findings on the AI market and job ecosystem so far, and presents the roles that we have found to be the most impactful for mitigating the catastrophic risks from AI.
This research relied on a comprehensive approach: a thorough literature review, an analysis of the legal landscape, and research interviews with dozens of experts and professionals working in the AI field or in relevant positions. We reviewed articles from academic journals, industry reports, and reputable online sources to gather relevant literature on AI market trends, technologies, applications, and challenges. This literature review served as the foundation for understanding the current state of the AI market and identifying key positions for mitigating AI catastrophic risks. We also examined the regulatory landscape, government initiatives, and legal frameworks pertaining to AI across key jurisdictions (specifically, the United States, the European Union, Canada, and the United Kingdom). This analysis helped us understand the potential impact of policies on AI development, AI deployment incentives, AI adoption, ethical considerations, and market dynamics. Combining the insights from the literature review and the legal analysis yielded a comprehensive understanding of the AI market and allowed us to identify opportunities for impact through one’s career.
The following roles are not ranked, and their impact relative to one another is not presented in this report, for two reasons. First, when it comes to AI governance, policy change requires aligned individuals placed at multiple key organizations working in concert with one another; breadth of placement therefore matters, and ranking roles could be counterproductive. Second, the relative impact of each type of role also depends on the specific organization, time, and context.
This public report only presents a subset of our research. For each of the roles, we also have a list of required qualifications, a theory of change, and other information that we use internally to help our program participants. If you are interested in working in one of the roles, we encourage you to apply for our AI program.
Please note that we expect most discussed roles to also be impactful in other countries. We are currently expanding our AI market research to non-Western key areas.
Key Geographic Areas Studied
The mitigation of AI catastrophic risks requires a collaborative approach involving all countries and major international organizations (e.g., the United Nations (UN) and the Organization for Economic Cooperation and Development (OECD)). As the development and deployment of AI technologies transcend national borders, it is crucial for countries to effectively work together and collectively address the challenges and potential dangers associated with AI. International cooperation is necessary as AI catastrophic risks threaten humanity’s future. Nonetheless, in order to amplify the impact of its findings, the current version of this report primarily focuses on the key areas most likely to have a crucial influence on the development and deployment of AI in the medium-term: the United States (“the U.S.”), the European Union (“the EU”), the United Kingdom (“the UK”), and Canada.
The U.S., the EU, Canada, and the UK possess advanced technological capabilities and substantial resources for research and development in AI. The U.S., home to major tech giants and leading academic institutions, has been at the forefront of AI innovation for decades. In fact, major AI companies and laboratories, such as OpenAI, Nvidia, and Anthropic, are based on American soil. Furthermore, the U.S. wields significant global influence, including in the field of AI, and political enthusiasm for adopting legal measures specific to AI is growing stronger.
Similarly, the EU has made substantial investments in AI research and development, with initiatives such as the European AI Strategy and the Horizon Europe program. In addition, the EU is a legal pioneer, as the European Commission proposed in 2021 the world’s first AI law, the EU AI Act. Experts generally agree that the EU AI Act will likely have a global impact through what is called the Brussels effect.
The UK may be a particularly impactful key area, as the current British prime minister, Rishi Sunak, wishes to pitch the UK as a frontrunner in global AI governance and has been taking concrete steps in this direction. For instance, in June 2023, the UK and the U.S. announced the Atlantic Declaration for a Twenty-First Century U.S.-UK Economic Partnership (“the Atlantic Declaration”), aimed at reinforcing the alliance between the two countries. The Atlantic Declaration sets out an action plan to reach goals including “Ensuring U.S.-UK Leadership in Critical and Emerging Technologies” and “Accelerating [the two countries’] cooperation on AI¹”. Furthermore, in April 2023 the UK created its Frontier AI Taskforce, “a start-up inside government, delivering on the ambitious mission given to us by the Prime Minister: to build an AI research team that can evaluate risk at the frontier of AI²”. Among other things, the Taskforce announced in September 2023 that it had established an expert advisory board spanning AI research and national security, whose members include Yoshua Bengio and Paul Christiano. The UK will also host the first global summit of states on AI safety in November 2023, which will “bring together key countries, leading tech companies and researchers to agree safety measures to evaluate and monitor the most significant risks from AI³”. Finally, the UK is an important research and innovation hub, being home to organizations such as Google DeepMind, and is actively seeking to put in place strong regulatory frameworks, which may in turn strongly influence subsequent legal measures adopted by other countries.
Lastly, Canada is an important actor in the AI ecosystem. It is at the forefront of developing comprehensive legal rules pertaining to AI. In June 2022, the Canadian government introduced the Artificial Intelligence and Data Act (AIDA) as part of Bill C-27, the Digital Charter Implementation Act. Although this bill has generated considerable criticism since its introduction, and its comprehensiveness has been put into question on several occasions, it still demonstrates the Canadian government’s political enthusiasm for thoroughly regulating AI. It also highlights opportunities to act now to influence these policies, and in turn potentially influence the legal and political actions of other governments that may follow Canada’s lead. In addition, the Canadian federal government, as well as Canadian provinces, offer several attractive measures to support startups looking to expand within Canada, whether through incubators or advantageous tax incentives. Coupled with immigration requirements that are less restrictive than those of the U.S. (e.g., concerning work permits), this makes Canada an attractive country for AI companies and labs to begin or expand their activities, and many have done so. Moreover, Canada is home to prominent AI researchers, such as Geoffrey Hinton and Yoshua Bengio, and has a dynamic AI ecosystem with important AI organizations, such as Mila.
Categorization of Most Impactful AI Roles
We categorized the most impactful roles into four categories:
Policy and governmental roles
AI auditing and evaluation roles
Organizational and “meta” roles
Research roles (AI governance and AI safety)
This classification reflects our core belief and observation that mitigating AI catastrophic risks is an endeavor requiring a collaborative, multidisciplinary, and multi-sectoral approach.
Please note that the types of roles and specific jobs are not classified in order of importance.
¹ “The Atlantic Declaration: A Framework for a Twenty-First Century U.S.-UK Economic Partnership”, The White House, Briefing Room, Statements and Releases, June 08, 2023, online: <https://www.whitehouse.gov/briefing-room/statements-releases/2023/06/08/the-atlantic-declaration-a-framework-for-a-twenty-first-century-u-s-uk-economic-partnership/> (accessed 14.08.2023). The Statement underlines that the cooperation on AI will have “a focus on ensuring the safe and responsible development of the technology”.
² “Independent report - Frontier AI Taskforce: first progress report”, Gov.UK, Independent Report, 7 September 2023, online: <https://www.gov.uk/government/publications/frontier-ai-taskforce-first-progress-report/frontier-ai-taskforce-first-progress-report> (accessed 18.09.2023).
³ “UK to host first global summit on Artificial Intelligence”, Gov.UK, press release, 7 June 2023, online: <https://www.gov.uk/government/news/uk-to-host-first-global-summit-on-artificial-intelligence> (accessed 14.08.2023).