Technical Program Manager, AI Propagation and Tooling
Google's projects, like our users, span the globe and require managers to keep the big picture in focus while being able to dive into the unique engineering challenges we face daily. As a Technical Program Manager at Google, you lead complex, multi-disciplinary engineering projects using your engineering expertise. You plan requirements with internal customers and usher projects through the entire project lifecycle. This includes managing project schedules, identifying risks, and clearly communicating them to project stakeholders. You're equally at home explaining your team's analyses and recommendations to executives as you are discussing the technical trade-offs in product development with engineers.
Using your extensive technical and leadership expertise, you manage various Engineering-specific programs and teams.
We are looking for a leader with technical, business, and program management skills to bridge the gap between human-centric safety policies and Google's AI capabilities. In this role, you will lead a specialized team dedicated to the propagation of AI systems (agentic and non-agentic) across our safety ecosystem.
At Google we work hard to earn our users’ trust every day. Trust & Safety is Google’s team of abuse fighting and user trust experts working daily to make the internet a safer place. We partner with teams across Google to deliver bold solutions in abuse areas such as malware, spam and account hijacking. A team of Analysts, Policy Specialists, Engineers, and Program Managers, we work to reduce risk and fight abuse across all of Google’s products, protecting our users, advertisers, and publishers across the globe in over 40 languages.

The US base salary range for this full-time position is $183,000-$271,000 + bonus + equity + benefits. Our salary ranges are determined by role, level, and location. Within the range, individual pay is determined by work location and additional factors, including job-related skills, experience, and relevant education or training. Your recruiter can share more about the specific salary range for your preferred location during the hiring process.
Please note that the compensation details listed in US role postings reflect the base salary only, and do not include bonus, equity, or benefits. Learn more about benefits at Google.
Responsibilities
- Partner with Product, Engineering, and Enforcement to deploy AI-driven solutions, automating complex moderation and fraud workflows while maintaining those systems within an AI-centric ecosystem.
- Lead the design and maintenance of autonomous AI agents that handle complex policy investigations to enhance user protection and experience without manual intervention.
- Support teams in testing AI safety tools to ensure system functionality for full-time employees (FTEs) and the extended workforce (xWF), including oversight of bugs and feature launches.
- Act as the primary connective tissue between Engineering, Product, and Enforcement to translate goals into prioritized roadmaps.
- Drive alignment across xWF, FTEs, and technical teams by leading training on emerging AI tools and managing the critical feedback loops necessary to refine performance.
Minimum qualifications:
- Bachelor’s degree in Computer Science, Data Science, or a related engineering field, or equivalent practical experience in technical systems.
- 8 years of experience in Technical Program Management.
- 5 years of experience in a leadership role, with or without direct reports.
Preferred qualifications:
- 2 years of experience in Artificial Intelligence/Machine Learning (AI/ML) or automated Risk/Fraud environments.
- Experience building feedback loops where AI tools learn from human-in-the-loop (HITL) feedback.
- Experience managing a team of technical experts at an organization in the technology sector.
- Understanding of the Large Language Model (LLM) lifecycle (training, fine-tuning, RLHF, and prompt engineering), as well as new and emerging capabilities of Gemini in particular and AI systems in general.
- Ability to speak the language of Engineering and Data Science to define technical requirements for agentic safety tools, and to translate those requirements into business terms for non-technical teams.