top of page

The Most Impactful Roles for Mitigating the Catastrophic Risks from Artificial Intelligence.

Introduction.

Methodology and important information

01

Introduction

The recent and rapid developments in the field of artificial intelligence (“AI”) mean that there is urgency to mitigate AI catastrophic risks, and a shift in the Overton window on various interventions to reduce catastrophic risks offer a unique momentum to do so. This endeavor requires, among other things, that more professionals who are aware of those risks and who are committed to helping mitigate them choose or change their career path accordingly. In turn, this requires identifying the most impactful roles for mitigating AI catastrophic risks to ensure that we can maximize the impact new contributors may have on these risks through their career. The set of people focused on this is small, particularly when compared to those who are focused on advancing capabilities of this model (one report from early 2023 estimates the former number to be 400 individuals). This number is growing quickly and interventions to reduce risks are becoming clearer. These developments suggest the need for a diverse set of interdisciplinary contributors. This report summarizes some of Successif’s findings on the AI market and job ecosystem so far and presents the roles that we have found to be the most impactful for mitigating the catastrophic risks from AI.

03

Key Geographic Areas Studied

The mitigation of AI catastrophic risks requires a collaborative approach involving all countries and major international organizations (e.g., the United Nations (UN) and the Organization for Economic Cooperation and Development (OECD)). As the development and deployment of AI technologies transcend national borders, it is crucial for countries to effectively work together and collectively address the challenges and potential dangers associated with AI. International cooperation is necessary as AI catastrophic risks threaten humanity’s future. Nonetheless, in order to amplify the impact of its findings, the current version of this report primarily focuses on the key areas most likely to have a crucial influence on the development and deployment of AI in the medium-term: the United States (“the U.S.”), the European Union (“the EU”), the United Kingdom (“the UK”), and Canada. The U.S., the EU, Canada, and the UK possess advanced technological capabilities and substantial resources for research and development in AI. The U.S., home to major tech giants and leading academic institutions, has been at the forefront of AI innovation for decades. In fact, major AI laboratories, such as OpenAI, Nvidia, and Anthropic, are based on American soil. Furthemore, the U.S. has an important global influence, which is also relevant in the field of AI, and the political enthusiasm for adopting legal measures specific to AI is getting stronger.

02

Methodology

This research involves a comprehensive approach, relying on a thorough literature review and legal landscape analysis, as well as research interviews with dozens of experts and professionals working in the AI field or in relevant positions. We reviewed articles from academic journals, industry reports, and reputable online sources to gather relevant literature on AI market trends, technologies, applications, and challenges. This literature review served as the foundation for understanding the current state of the AI market and identifying key positions for mitigating AI catastrophic risks. Furthermore, we examined the regulatory landscape, government initiatives, and legal frameworks pertaining to AI across key jurisdictions (specifically, the United States, the European Union, Canada, and the United Kingdom). This analysis helped in understanding the potential impact of policies on AI development, AI deployment incentives, AI adoption, ethical considerations, and market dynamics. By combining the insights derived from the literature review and legal analysis, a comprehensive understanding of the AI market was obtained, allowing for the identification of opportunities for impact through one’s career.

04

Categorization of Roles

We categorized the most impactful roles in five (5) categories: Policy and governmental roles, Standardization roles, AI auditing and evaluation roles, Organizational and “meta” roles, Research roles (AI governance and AI safety). This classification respects our core belief and observation that mitigating AI catastrophic risks is an endeavor that requires a collaborative multidisciplinary and multi-sectoral approach. Please note that the types or roles and specific jobs are not classified by order of importance.

Roles.

AI Auditing and Evaluation Roles

Research Engineer - Model Evaluations Tools

A software engineer specializing in AI model evaluations tools has a deep understanding of the needs of model evaluation and those of ML researchers. This position often involves close collaboration with ML researchers to define the scope and vision of tools needed to evaluate large AI models, including for detecting capabilities having potential to contribute to catastrophic risks. This role entails the identification of methods to integrate the issues surrounding the latest large models into evaluation tools, the performance of hands-on evaluations to pinpoint areas for evals improvement, the design of user-friendly interfaces for complex evaluations, and the application of data visualization techniques to simplify the understanding of intricate data. This role plays a pivotal part in enabling researchers to excel in their work and accelerate progress in the field of AI evaluation.

AI Auditing and Evaluation Roles

AI Auditor

An AI auditor is crucial to rigorously assess AI models and ensure that they comply with existing evaluation standards. This role demands thoughtful analysis to discern optimal, safe, and ethical actions for AI models in diverse scenarios. The AI Auditor may, for instance, use a web interface to simulate environments where models operate within text-based games, striving to acquire resources and replicate, and map the behaviors to qualitative and/or quantitative characterizations. While computer science proficiency is valuable, collaboration with technical experts may be an option for candidates with deep domain expertise, e.g. bioweapons. Proficiency in domains such as manipulation prevention, conventional and nonconventional weapons, or social engineering and hacking more broadly would be helpful. The AI auditor’s responsibilities may encompass simulating environments, prompting effective actions, assessing model recommendations, and contributing to a red-team/blue-team approach. Identifying essential yet challenging steps for AI models and exploring potential solutions are a core aspect of this role.

Organizational and “Meta” Roles

Product Manager

The Product Manager often bridges the gap between the technical team and other stakeholders. They are responsible for guiding the development of AI products. Their tasks usually include identifying customer needs, defining product requirements, and working with cross-functional teams to create and improve the organization’s products and/or services.

Organizational and “Meta” Roles

Chief Technology Officer (CTO)

A CTO leads the vision for the technological roadmap, development, and execution of AI products. They oversee the technology department including, and work closely with, the executive team, to develop and implement strategies to drive innovation, increase efficiency, and achieve business objectives. The CTO may also hold responsibilities at the intersection of technology advancement and risk management, and may contribute to ensuring that the technology systems and processes within the organization are compliant with standards the organization or product team wishes to adhere to.

Organizational and “Meta” Roles

Privacy Research Engineer

Privacy Research Engineers are key roles for ensuring that an organization maintains stringent privacy standards, including in its data collection, algorithms, products, and services. Such a professional specializes in designing and implementing the technology that safeguards user data, confidential training data, restricted inference data, and privacy. They play a critical role in developing and integrating privacy features into various software and applications.

Organizational and “Meta” Roles

Data Security Specialists

Data security roles contribute to ensuring the security of the organization’s information systems and platforms. Individuals in data security roles are often responsible for overseeing information security, cybersecurity and IT risk management programs based on industry-accepted information security and risk management frameworks. They contribute to the continuous development, implementation and updating of security and privacy policies, standards, guidelines, baselines, processes and procedures in compliance with the law and with the best practices.

Organizational and “Meta” Roles

AI Security Engineer

A Software Security Engineer is usually a key member of an organization’s security team. In major AI organizations, an additional expertise beyond traditional software cybersecurity, also involves securing the integrity of the output of the AI model produced by the organization. This involves Modeling/Detecting/Mitigating vulnerabilities associated with the fact that customers will interact with the model, potentially introducing adversarial input leading to problematic and dangerous behavior of the model. They usually possess substantial expertise in conducting security assessments (evaluating code, architecture, model mechanistic interpretability, threat modeling, etc.) to understand the attack surface of complex systems, and review and offer recommendations for their mitigation. A Software Security Engineer will work, among other things, on incorporating security measures into AI systems at the algorithm, software, and infrastructure levels to minimize risks from emerging threats.

Organizational and “Meta” Roles

Chief Operations Officer (COO)

The role of COO is often a senior leadership position. The COO is responsible for overseeing all of the aspects of the organization’s work that aren’t research, including human resources, finance, development, communications, legal risk and compliance, etc. This role is a highly autonomous position. The individual regularly makes high-level decisions about the direction of the organization’s operations and projects, weighs in on key organizational strategic priorities, and helps shape the direction of the organization’s work.

Policy and Governmental Roles

Staff Member, Senate Committee or Office (in the U.S.)

Roles in the Senate (staff member) vary in their level of seniority, required Capitol Hill experience, and potential. Within the Senate, the aforementioned roles are the ones most likely to yield influence pertaining to AI policies.

Policy and Governmental Roles

Staff Member, Department for Science, Innovation and Technology - Office for Artificial Intelligence (UK)

The role of Staff Member at the DSIT, and more specifically at its Office for Artificial Intelligence, may provide a unique opportunity to influence the advancements in AI policies and to help coordinate the implementation of the UK National AI Strategy. Numerous positions are considered as Staff Member, and those can encompass policy advisors, project leads, and heads of strategy. These professionals are responsible for supporting and coordinating initiatives related to AI, innovation, and technological progress. Their duties encompass a wide range of activities, including policy development, research, stakeholder engagement, and strategic planning.

Policy and Governmental Roles

Civil Servant (in the UK)

Civil servants operate with varying statuses, either as career bureaucrats or regular employees, and are situated within different organizations of the UK government, such as ministries, ministerial cabinets, or public agencies. Civil servants in the UK are generally categorized into three main types: (1) operational delivery, (2) cross-departmental specialisms, and (3) departmental specialisms. Cross-departmental specialisms encompass roles essential across all government departments, including government-specific positions like policy experts which may be particularly impactful for influencing AI policy. Note that this job does not necessarily require UK citizenship.

Policy and Governmental Roles

Staff Member, European Commission or European Parliament (in the EU)

A Staff Member in the European Commission or European Parliament is a civil servant or administrative professional who works within the EU's legislative or executive branches, respectively. They may engage in tasks related to policy development, legislative processes, research, and administrative support, assisting in the creation and implementation of EU laws and regulations while contributing to the functioning of these institutions. The term “Staff member” is broad and includes numerous specific roles.

Policy and Governmental Roles

Policy Analyst and Policy Advisor

Although these two roles are distinct in the realm of public policy and governance, they also often overlap significantly, which is the reason we decided to address them simultaneously. Generally, the difference lies in the fact that policy analysts primarily focus on conducting in-depth research, data collection, and analysis to provide evidence-based insights into policy issues, whereas policy advisors play a more strategic role, offering guidance and recommendations to decision-makers on policy options and their potential impacts. While both roles contribute to informed policymaking, policy analysts are generally research-oriented, whereas policy advisors often provide strategic counsel and navigate the complexities of policy implementation. Policy analysts and policy advisors both collaborate closely with project leaders to provide project management, research support and analysis support. Analysts engage in various tasks critical to the given agency's or think tank’s mission, including conducting specialized analyses and studies, engaging with governmental stakeholders, and supporting policymaking efforts in the field of AI safety and governance. A policy analyst and a policy advisor’s work contributes in shaping the decisions and strategies of policymakers within the government, and their influence can extend to guiding the trajectory of policy development. Specifically within the realm of mitigating AI catastrophic risks, these professionals serve as essential bridges between rigorous research and the practical application of policies and regulations.

Research Roles (AI Governance and AI Safety)

AI Governance Researcher and AI Policy Researcher

Although distinct, the roles of AI governance researcher and of AI policy researcher often have important overlap, which is why we present them together. Essentially, AI governance researchers and AI policy researchers investigate what it may take, governance and policy-wise, to prepare for a world with advanced AI. More specifically, an AI governance researcher is likely to focus on establishing frameworks to guide AI development, deployment, and use, to address issues such as safety and transparency. An AI policy researcher, on the other hand, is likely to delve into the legal and regulatory considerations pertaining to AI, and formulate policies and recommendations for governments and regulatory bodies. Both roles focus on researching actions and strategies that decision-makers and key actors, both within government and industry, should pursue in order to mitigate the catastrophic risk from AI.

Research Roles (AI Governance and AI Safety)

Theoretical AI Alignment Researcher

An AI alignment researcher works on researching how we can better ensure that advanced AI systems behave as intended and in alignment with human interests, even when not under direct human supervision. This entails conducting theoretical alignment research, which is often conceptual, algorithmic, or mathematical. AI alignment work may sometimes also involve hands-on research engineering tasks, depending on the organization and role. Such work generally requires thinking about potential behaviors of AI systems with a security mindset, and thus focuses on investigating topics like interpretability, value learning, inner alignment, corrigibility, etc.

Research Roles (AI Governance and AI Safety)

AI Safety Research Scientist (ML Research)

An AI Safety Research Scientist working within ML works on developing methodologies and strategies to ensure the safety, controllability, value alignment, and robustness of AI systems. They often conduct in-depth research and formulate new ideas in machine learning or work on improving existing ones, an endeavor which usually involves mixes of algorithms design, theory, programming, a security mindset, and ML engineering. An AI Safety Research Scientist helps establish and/or execute the research trajectory to enhance the safety, alignment, and resilience of AI systems against adversarial or malicious applications, as well as against accidental rogue actions and broader sociotechnical systemic failures. Note that this job sometimes overlaps with the role of AI Safety Research Engineer, since that job partially consists in implementing the findings from AI safety research scientists.

Research Roles (AI Governance and AI Safety)

AI Safety Research Engineer (ML Research Implementation)

An AI safety research engineer focuses both on the practical implementation of existing AI safety techniques as well as development support for researchers while researching new techniques, which are often elaborated by AI safety research scientists. This purview includes overseeing and contributing to scientific experiments, building technique-specific software infrastructure, developing benchmarks or tests, designing API interfaces, optimizing and improving safety-related algorithms, proposing new safety ideas, and refining safety tools. Note that this job sometimes overlaps with the role of AI safety research scientist, since it consists in part in implementing the findings of AI safety research scientists.

Standardization Roles

Standardization Body, Committee Member

A committee member at a standardization body, such as CEN/CENELEC Joint Technical Committee on Artificial Intelligence (JTC 21) (in the EU) contributes their technical and/or industry expertise to the development and maintenance of robust AI standards. They take part in the committee’s work and actively engage in building consensus among diverse stakeholders, fostering collaboration among experts, industry representatives, and policymakers to create effective and comprehensive AI standards. Please note that this role is usually held in addition to a main job position.

bottom of page