Policy Specialist, Legal Content Policy and Standards, Cloud

Google · Published 21 hours ago

Trust & Safety team members are tasked with identifying and taking on the biggest problems that challenge the safety and integrity of our products. They use technical know-how, excellent problem-solving skills, user insights, and proactive communication to protect users and our partners from abuse across Google products like Search, Maps, Gmail, and Google Ads. On this team, you're a big-picture thinker and strategic team player with a passion for doing what's right. You work globally and cross-functionally with Google engineers and product managers to identify and fight abuse and fraud cases at Google speed - with urgency. And you take pride in knowing that every day you are working hard to promote trust in Google and ensure the highest levels of user safety.

As a Policy Specialist (Google Cloud Platform), you will be a thought leader on Cloud-related content moderation issues and create policies that balance compliance with local law, user expression, and the public interest. You'll manage removal requests from government stakeholders and users while building expertise in areas such as hate speech and cyberbullying. You'll manage content operations to ensure timeliness and quality. You will collaborate with stakeholders, including but not limited to public policy, product policy, legal, and enforcement teams, who all work together on policy issues affecting Google and its users. Working with these cross-functional stakeholders, you'll develop policies for new content moderation legislation (e.g., copyright issues, defamation, data protection, and government requests).

You have strong leadership, communication, project management, and people management skills. You can navigate ambiguity and operate with the highest level of integrity when making balanced decisions.

At Google we work hard to earn our users' trust every day. Trust & Safety is Google's team of abuse-fighting and user trust experts working daily to make the internet a safer place. We partner with teams across Google to deliver bold solutions in abuse areas such as malware, spam, and account hijacking. A team of Analysts, Policy Specialists, Engineers, and Program Managers, we work to reduce risk and fight abuse across all of Google's products, protecting our users, advertisers, and publishers across the globe in over 40 languages.

The US base salary range for this full-time position is $105,000-$151,000 + bonus + equity + benefits. Our salary ranges are determined by role, level, and location. Within the range, individual pay is determined by work location and additional factors, including job-related skills, experience, and relevant education or training. Your recruiter can share more about the specific salary range for your preferred location during the hiring process.

Please note that the compensation details listed in US role postings reflect the base salary only, and do not include bonus, equity, or benefits. Learn more about benefits at Google.

Responsibilities

  • Be a thought leader on legal removals issues by identifying and analyzing content moderation trends and key policy issues affecting the Google Cloud Platform. Lead discussions with cross-functional stakeholders on emerging regulations and their implications for the legal removals landscape at Google.
  • Manage content operations and drive escalations of sensitive government requests by liaising with Product, Policy, Communications, and Legal teams. Balance various legal considerations to resolve issues effectively.
  • Develop and launch scalable policies and guidelines for handling large volumes of requests, based on different regional trends and legal issues (data protection, copyright, defamation, etc.).
  • Participate in the on-call rotation schedule to manage high-priority escalations that may occur outside of standard work hours, including weekends and holidays.
  • Work with graphic, controversial, or upsetting content.

Minimum qualifications:

  • Bachelor's degree or equivalent practical experience.
  • 2 years of experience in a policy, legal, trust and safety, or technology environment.
  • Experience with data analysis, data tools (e.g., SQL), operations, or trend identification.

Preferred qualifications:

  • JD, MBA, or Master’s degree.
  • Familiarity with AI ethics, responsible AI principles, and current debates surrounding AI safety and governance.
  • Knowledge of the geo-political landscape and current events affecting cloud platforms.
  • Knowledge of the technology sector, its trends, and key policy issues affecting the internet (e.g., intellectual property, free expression, online safety).
  • Understanding of AI/LLM systems and the ability to work with quality metrics and enforcement diagnostics, including false positive/false negative (FP/FN) tracking, root-cause analyses, and precision-recall tradeoffs.
  • Excellent problem-solving and critical thinking skills with attention to detail in an ever-changing environment.