AI Safety Research Scientist
Posted on Dec. 20, 2024 by Partnership on AI
- San Francisco, United States of America
- $126,142 - $159,724
- Full Time
We work with our Partners on the voluntary adoption of best practices and with governments to advance policy innovation. Here's how we work:
- Convening diverse stakeholders across the world, including experts and communities most impacted by AI
- Creating influential, accessible resources and recommendations to shape responsible AI development
- Publishing progress reports that track improvements in Partner practices and policy developments
ROLE SUMMARY
In this role, you'll lead technical research to assess and enhance the feasibility of governance options being explored in public policy discussions on AI/AGI safety. This includes evaluating the effectiveness of different interventions, developing options to improve implementation, and proposing practical mechanisms for oversight and compliance. Working alongside the Head of AI Safety program, you'll lead research that informs how we govern increasingly capable AI systems.
Examples of past PAI work include:
- Creating scalable guidelines that tailor safety practices for general-purpose AI systems
- Developing evidence-based recommendations for synthetic media governance through case studies with industry partners
Future technical work will include:
- Developing industry guidelines and technical frameworks for monitoring AI agents, considering tradeoffs between transparency, privacy, implementation costs, and user trust
- Contributing to open problems in technical AI governance
You'll collaborate with diverse stakeholders to advance consensus around governance approaches that work in practice, helping decision-makers in industry and government understand when and how to intervene in advanced AI development. The role can be performed remotely from anywhere in the US or Canada.
RESPONSIBILITIES
- Lead research that connects technical analysis with policy needs, identifying technical challenges underlying AI/AGI safety discussions
- Propose governance interventions that could span different layers - from model safety to supply chain considerations to broader societal resilience measures
- Use a multistakeholder organization's tools - rigorous analysis, public and private communications, working groups, and convenings - to gather insights from Partners on AI development processes, ensuring research outputs are practical and impactful
- Author/co-author research papers, blogs, and op-eds with PAI staff and Partners, and share insights at leading AI conferences like NeurIPS, FAccT, and AIES
Project Management and Stakeholder Engagement
- Lead technical research workstreams with high autonomy, supporting the broader AI safety program strategy
- Build and maintain strong relationships across PAI's internal teams and Partner community to advance research objectives
- Represent PAI in key external forums, including technical working groups and research collaborations
Strategic Communication and Impact
- Translate complex technical findings into clear, actionable recommendations for AI safety institutes, policymakers, industry partners, and the public
- Support development of outreach strategies to increase adoption of PAI's AI safety recommendations
ABOUT YOU
- PhD or MA with three or more years of research or practical experience in a relevant field (e.g., computer science, machine learning, economics, science and technology studies, philosophy)
- Strong understanding of technical AI landscape and governance challenges, including safety considerations for advanced AI systems
- Demonstrated ability to conduct rigorous technical governance research while considering broader policy and societal implications
- Excellent communication skills, with proven ability to translate complex technical concepts for different audiences
- Track record of building collaborative relationships and working effectively across diverse stakeholder groups
- Adaptable and comfortable working in a dynamic, mission-driven organization
THE FOLLOWING COULD BE AN ADVANTAGE
- Experience at frontier AI labs or tech companies (AI safety experience not required; we welcome those with ML, product, policy or engineering backgrounds) or government agencies working on AI-related areas
- Subject matter expertise from relevant areas such as:
- AI system Trust & Safety (e.g., developing monitoring systems, acceptable use policies, or safety metrics for large language models)
- Privacy-preserving machine learning and differential privacy
- Cybersecurity, particularly vulnerability assessment and incident reporting
QUALITIES THAT ARE IMPORTANT TO US
- Builds Trust: Able to be transparent and authentic, conveying trust, communicating openly, and involving key stakeholders in decision-making
- Visionary: Able to take a long-term perspective, conveying a belief in an outcome, and displaying the confidence to reach goals
- Inspirational: Able to inspire and motivate others in a positive manner
- Courageous: Able to seek out opportunities for continuous improvement, and fearless in intervening in challenging situations
- Decisive: Able to make informed decisions in a timely fashion
- Personal Development: Able to seek opportunities for individual personal development
ADDITIONAL INFORMATION
- Research has shown that some potential applicants submit an application only when they feel they meet nearly all of the qualifications for a role. We encourage you to take a leap of faith and apply as long as you are passionate about making a real impact on responsible AI. We are very interested in hearing from a diverse pool of candidates.
- PAI offers a generous paid leave and benefits package, currently including:
  - Twenty vacation days; three personal reflection days; sick leave and family leave above industry standards
  - High-quality PPO and HMO health insurance plans, many 100% covered by PAI; dental and vision insurance 100% covered by PAI
  - Up to a 7% 401(k) match, vested immediately
  - Pre-tax commuter benefits (Clipper via TriNet)
  - Automatic cell phone reimbursement ($75/month)
  - Up to $1,000 in professional development funds annually; $150 per month to access co-working space
  - Regular team lunches and focused work days
  - Opportunities to attend AI-related conferences and events and to collaborate with our roughly 100 partners across industry, academia, and civil society
  Please refer to our careers page for an updated list of benefits.
- Must be eligible to work in the United States or Canada; we are unable to sponsor visas at this time.
- PAI is headquartered in San Francisco, with a global membership base and scope. This role is eligible for remote work within the United States and Canada with no requirement to be located in San Francisco.
PAI is proud to be an equal opportunity employer. We celebrate diversity and we are committed to creating an inclusive environment in all aspects of employment, including recruiting, hiring, promoting, training, education assistance, social and recreational programs, compensation, benefits, transfers, discipline, and all privileges and conditions of employment. Employment decisions at PAI are based on business needs, job requirements, and individual qualifications.
PAI will consider for employment qualified applicants with criminal histories, in a manner consistent with the San Francisco Fair Chance Ordinance or similar laws.
The Partnership on AI may become subject to certain governmental record keeping and reporting requirements for the administration of civil rights laws and regulations. We also track diversity in our workforce for the purpose of improving over time. In order to comply with these goals, the Partnership on AI invites employees to voluntarily self-identify their gender and race/ethnicity. Submission of this information is voluntary and refusal to provide it will not jeopardize or adversely affect employment or any consideration you may receive for employment or advancement. The information obtained will be kept confidential.
HOW TO APPLY
- Resume and/or CV
- Cover Letter reflecting on your motivation for the role and experiences illustrating fit
- Writing sample of 2-5 pages (please append to your cover letter): this can be any writing for which you were the primary author and does not have to be related to AI safety
Applications for this job posting will be accepted until 11:59pm PT January 20, 2025.