GenAI Policy Enforcement Manager, YouTube

Google

Trust & Safety team members are tasked with identifying and taking on the biggest problems that challenge the safety and integrity of our products. They use technical know-how, excellent problem-solving skills, user insights, and proactive communication to protect users and our partners from abuse across Google products like Search, Maps, Gmail, and Google Ads. On this team, you're a big-picture thinker and strategic team player with a passion for doing what's right. You work globally and cross-functionally with Google engineers and product managers to identify and fight abuse and fraud cases at Google speed - with urgency. And you take pride in knowing that every day you are working hard to promote trust in Google and ensure the highest levels of user safety.

Fast-paced, dynamic, and proactive, YouTube Trust and Safety is dedicated to ensuring that YouTube is a place for users, viewers, and creators around the world to create and express themselves. Whether understanding and solving their online content concerns, navigating within global legal frameworks, or writing and enforcing worldwide policy, the Trust and Safety team is on the frontlines of enhancing the YouTube experience, building internet safety, and protecting free speech in our ever-changing digital world.

In this role, you will navigate a complex policy stack to determine the safety stance of a wide range of outputs, including video, image, and text. You don't just enforce existing rules; you will actively interpret and evolve our policy frameworks to keep pace with the rapid advancements in the GenAI space.

At YouTube, we believe that everyone deserves to have a voice, and that the world is a better place when we share, and build community through our stories. We work together to give everyone the power to share their story, explore what they love, and connect with one another in the process. Working at the intersection of cutting-edge technology and boundless creativity, we move at the speed of culture with a shared goal to show people the world. We explore new ideas, solve real problems, and have fun — and we do it all together.

The US base salary range for this full-time position is $132,000-$194,000 + bonus + equity + benefits. Our salary ranges are determined by role, level, and location. Within the range, individual pay is determined by work location and additional factors, including job-related skills, experience, and relevant education or training. Your recruiter can share more about the specific salary range for your preferred location during the hiring process.

Please note that the compensation details listed in US role postings reflect the base salary only, and do not include bonus, equity, or benefits. Learn more about benefits at Google.

Responsibilities

  • Orchestrate enforcement strategies and operational workflows for emerging YouTube GenAI products.
  • Evaluate novel risk vectors—often without historical precedent—and influence their integration into the broader Trust and Safety mitigation roadmap. Define and manage the key performance indicators (KPIs) that measure the health and effectiveness of enforcement operations.
  • Develop a keen understanding of various business models and the underlying technical implementations of GenAI features to ensure policy alignment.
  • Navigate and master an evolving policy stack, determining the safety stance for multi-modal outputs (e.g., video, image, and text). Directly engage with and review graphic, controversial, or offensive content to ensure accurate application of GenAI policies.
  • Synthesize complex data into high-level executive materials and recommendations to guide critical business and safety decisions. Manage high-priority escalations with rapid turnaround times, defining action plans and providing real-time updates to executive leadership.

Minimum qualifications:

  • Bachelor's degree or equivalent practical experience.
  • 7 years of experience in data analytics, Trust and Safety, policy, cybersecurity, or related fields.

Preferred qualifications:

  • Master's degree or PhD in a relevant field.
  • Experience working with or managing safety operations for AI/ML models, including an understanding of generative AI technical implementations and risks.
  • Experience with thought leadership in an analytical anti-abuse environment (e.g., content abuse, spam, or behavioral abuse) and with large-scale content moderation.
  • Excellent business judgment and communication skills, with the ability to build relationships and influence cross-functional partners across geographies to drive strategy and business action.
  • Excellent strategic and problem-solving skills, with the ability to frame complex problems, structure data-driven analyses, and lead a team to drive tangible impact.