Staff Security Engineer

Bangalore, Karnataka, India — February 25, 2026

Who We Are

Verve has created a more efficient and privacy-focused way to buy and monetize advertising. Verve is an ecosystem of demand and supply technologies that fuses data, media, and technology to deliver results and growth to both advertisers and publishers, no matter the screen or location, and no matter who, what, or where a customer is. With 30 offices across the globe and an eye on serving forward-thinking advertising customers, Verve's solutions are trusted by more than 90 of the United States' top 100 advertisers, 4,000 publishers globally, and the world's top demand-side platforms. Learn more at www.verve.com.

Who You Are

You are a security-minded engineer who thrives at the intersection of Cloud Governance, DevOps, and AI. You don't just find vulnerabilities; you build the systems that prevent them. You have a deep understanding of the Google Cloud ecosystem, specifically Google Security Command Center (SCC) and emerging AI technologies. You believe that "Infrastructure as Code" is the only way to scale and that AI environments must be "Auditable by Design." You are proactive and excited about the challenge of centralizing security policies across a global organization.

What You Will Do

  • Lead the transition from project-based policy management to an organization-level governance model (using GCP Folder/Org policies) to ensure a consistent security posture.

  • Design and implement auditable environments for AI workloads in GCP. This includes configuring AI platforms and services with strict Data Access audit logs, VPC Service Controls, and Model Registry governance.

  • Design automated compliance checks and reporting using Google SCC and CI/CD pipelines to ensure we are always "audit-ready" for both standard cloud and AI resources.

  • Develop Infrastructure-as-Code (IaC) templates (Terraform) that consistently enforce organizational policies and AI safety guardrails during resource provisioning.

  • Configure and optimize automated monitoring within Google SCC to detect, alert on, and respond to threats in real time, including AI-specific risks like data poisoning or unauthorized model access.

  • Clearly define policy roles, ensuring teams understand the shared responsibility model for both general cloud services and specialized AI/ML platforms.
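The organization-level governance model described above can be sketched in Terraform. This is a minimal, illustrative fragment, not Verve's actual configuration: the variable names (`var.org_id`, `var.folder_id`) and the choice of constraints are assumptions for the example; the resource types and constraint names come from the Terraform Google provider and GCP's predefined organization policy constraints.

```hcl
# Hypothetical inputs supplied by the caller.
variable "org_id"    { type = string }
variable "folder_id" { type = string }

# Org-wide boolean constraint: block user-managed service account
# key creation across every project in the organization.
resource "google_organization_policy" "disable_sa_keys" {
  org_id     = var.org_id
  constraint = "constraints/iam.disableServiceAccountKeyCreation"

  boolean_policy {
    enforced = true
  }
}

# Folder-level list constraint: restrict where resources may be
# created, e.g. to keep AI workloads in approved regions.
resource "google_folder_organization_policy" "resource_locations" {
  folder     = var.folder_id
  constraint = "constraints/gcp.resourceLocations"

  list_policy {
    allow {
      values = ["in:asia-locations"]
    }
  }
}
```

Defining policies at the organization and folder level rather than per project means new projects inherit the guardrails automatically at provisioning time, which is what makes the posture consistent and "audit-ready" as the organization grows.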
