Principal Python AI Engineer

Waterloo, Ontario · April 10, 2026 · Full Time

OPENTEXT - THE INFORMATION COMPANY

OpenText is a global leader in information management, where innovation, creativity, and collaboration are the key components of our corporate culture. As a member of our team, you will have the opportunity to partner with the most highly regarded companies in the world, tackle complex issues, and contribute to projects that shape the future of digital transformation.


AI-First. Future-Driven. Human-Centered.

At OpenText, AI is at the heart of everything we do—powering innovation, transforming work, and empowering digital knowledge workers. We're hiring talent that AI can't replace to help us shape the future of information management. Join us.

YOUR IMPACT

Join the OpenText Cloud Services team as a Principal Python AI Engineer and play a pivotal role in shaping the future of AI in the cloud. Our Cloud Services & Enablement organization builds the secure, scalable, and efficient MLOps foundations that power OpenText’s industry‑leading SaaS applications.

In this role, you will help bring cutting-edge AI/ML models into production at scale—driving innovation, accelerating delivery, and enabling customers to unlock the full value of their information.

Your work will directly support our mission to become the world’s leading Information Management company, where information and AI empower every person and organization to reach their full potential.


WHAT THE ROLE OFFERS

  • Design and implement scalable, secure MLOps pipelines using modern CI/CD practices and cloud-native technologies.
  • Automate the deployment, monitoring, and governance of machine learning models across multiple production environments.
  • Partner with Data Scientists, ML Engineers, and DevOps teams to streamline model development, training, validation, and release processes.
  • Manage and optimize ML infrastructure—including GPU/TPU clusters, distributed training environments, and high-performance model serving platforms.
  • Diagnose and resolve complex issues across ML workflows, infrastructure, and distributed systems with a focus on reliability, performance, and cost-efficiency.
  • Develop and maintain infrastructure-as-code (IaC) for ML environments using tools such as Terraform, CloudFormation, or Helm.
  • Contribute to internal MLOps frameworks, standards, and best practices that scale across the organization.

WHAT YOU NEED TO SUCCEED

  • 5+ years of experience in DevOps, SRE, or Cloud Engineering, including 2+ years focused on MLOps or ML infrastructure.
  • Deep expertise with at least one major cloud platform (AWS, GCP, or Azure) and its ML ecosystem.
  • Hands-on experience with ML lifecycle platforms such as MLflow, Kubeflow, SageMaker, Vertex AI, or Azure ML.
  • Strong understanding of container orchestration (EKS/GKE/AKS) and modern model serving frameworks (e.g., vLLM, TorchServe).
  • Proficiency with CI/CD tools (GitLab CI/CD, Argo Workflows, Flux) and infrastructure automation (Terraform, Helm, Ansible).
  • Strong programming and scripting skills in Python and Bash, along with fluency in YAML-based configuration.
  • Experience with observability and monitoring stacks like Prometheus, Grafana, ELK, or CloudWatch.

OpenText's efforts to build an inclusive work environment go beyond simply complying with applicable laws. Our Employment Equity and Diversity Policy provides direction on maintaining a working environment that is inclusive of everyone, regardless of culture, national origin, race, color, gender, gender identification, sexual orientation, family status, age, veteran status, disability, religion, or other basis protected by applicable laws.

If you need assistance and/or a reasonable accommodation due to a disability during the application or recruiting process, please contact us at [email protected]. Our proactive approach fosters collaboration, innovation, and personal growth, enriching OpenText's vibrant workplace.
