AI Offense-Defense Dynamics Research Lead
Job Description
Shape the Future of AI Safety: Lead Research on Offense-Defense Dynamics
Are you passionate about ensuring AI benefits humanity? The Center for AI Risk Management & Alignment (CARMA) is seeking a highly motivated and skilled researcher to lead groundbreaking work in understanding the complex interplay between AI’s potential for good and its inherent risks. As an AI Offense-Defense Dynamics Research Lead, you will be at the forefront of defining how we evaluate and govern increasingly powerful AI systems.
About CARMA: Navigating the Risks of Transformative AI
CARMA’s mission is to mitigate the risks posed by advanced AI to humanity and the biosphere. We achieve this by:
- Grounding AI risk management in rigorous analysis.
- Developing policy frameworks that address Artificial General Intelligence (AGI).
- Advancing technical safety approaches.
- Fostering global perspectives on durable safety.
Join us in providing critical support to society for managing the outsized risks from advanced AI before they materialize. We relish the opportunity to welcome new talent to our team!
Responsibilities: Deciphering AI’s Societal Impact
As the AI Offense-Defense Dynamics Research Lead, you will:
- Develop quantitative system dynamics models that capture the intricate relationships between technology, society, and institutions to understand AI risk landscapes.
- Design analytical models and simulations to pinpoint critical intervention points where policy can effectively shift the offense-defense balance toward safer outcomes.
- Expand and operationalize our offense-defense dynamics taxonomy and framework, creating metrics and models to predict whether AI system features favor beneficial or harmful applications.
- Build empirically-informed frameworks using documented cases of AI misuse and positive deployments to validate our theoretical models.
- Investigate how technical characteristics (breadth, depth, accessibility, adaptability) interact with sociotechnical contexts to determine offense-defense dynamics.
- Communicate your findings through blog posts, articles, conference talks, and media engagement to enhance public understanding.
- Create tools and methodologies to assess new AI models for their likely offense-defense implications upon release.
- Draft evidence-based guidance for AI governance that considers the complex interdependencies between technological capabilities and deployment contexts.
- Translate research into actionable recommendations for policymakers, AI developers, security professionals, and standards organizations.
Qualifications: Your Expertise in Action
We’re looking for candidates with:
- An M.Sc. or higher in Computer Science, Cybersecurity, Criminology, Security Studies, AI Policy, Risk Management, or a related field.
- Proven experience with complex systems modeling, risk assessment methodologies, or security analysis.
- A strong understanding of dual-use technologies and factors that influence their offensive or defensive applications.
- A deep understanding of modern AI systems (large language models, multimodal models, autonomous agents) and the ability to analyze their architectures and capability profiles.
- Experience in security, safety engineering, AI governance, operational risk management, system dynamics modeling, network theory, complexity science, adversarial analysis, or technical standards development.
- The ability to develop qualitative frameworks and quantitative models that capture sociotechnical interactions, including comfort in creating semi-quantitative, semi-empirical models.
- A record of relevant publications or research contributions related to technology risk, governance, or security.
- Exceptional analytical thinking and the ability to identify non-obvious path dependencies and feedback loops in complex systems.
Bonus Points: Stand Out From the Crowd
The following skills and experience are highly valued:
- A PhD in a relevant field.
- Experience with system dynamics modeling, hypergraph techniques, or complex network analysis methods.
- Skills in developing interactive tools or dashboards for risk visualization and communication.
- A background in interdisciplinary research bridging technical and social science domains.
- Demonstrated aptitude in top-down techniques and first-principles thinking.
- Experience quantifying qualitative risk factors or developing proxy metrics for complex phenomena.
- A background in compiling and analyzing incident databases or case studies for pattern recognition.
- Familiarity with empirical approaches to technology assessment and impact prediction.
- Knowledge of international relations theory as it applies to technology proliferation dynamics.
Compensation and Benefits
The salary range for this role is $125,000 – $200,000 per year, plus excellent benefits for U.S. employees.
Equal Opportunity Employer
CARMA/SEE is an Equal Opportunity Employer committed to inclusivity and diversity. We encourage applications from all qualified individuals.
To Apply
Please follow the application instructions in the original posting.