Senior MultiCloud DevOps / Platform Engineer with Harness

Bulgaria; Moldova; Poland; Romania

April 14, 2026

 

Hello, let’s meet!

 

Who We Are

While Xebia is a global tech company, our journey in CEE started with two Polish companies – PGS Software, known for world-class cloud and software solutions, and GetInData, a pioneer in Big Data. Today, we’re a team of 1,000+ experts delivering top-notch work across cloud, data, and software. And we’re just getting started.

What We Do

We work on projects that matter – and that make a difference. From fintech and e-commerce to aviation, logistics, media, and fashion, we help our clients build scalable platforms, data and AI solutions, and cutting-edge applications to shape the future of tech. Our clients include McLaren, Aviva, Deloitte, Spotify, Disney, ING, UPS, Tesco, Truecaller, AllSaints, Volotea, Schmitz Cargobull, Allegro, InPost, and many, many more.

We value smart tech, real ownership, and continuous growth. We use modern, open-source stacks, and we’re proud to be trusted partners of Databricks, dbt, Snowflake, Azure, GCP, and AWS. Fun fact: we were the first AWS Premier Partner in Poland!

Beyond Projects

What makes Xebia special? Our community. We support tech communities, organize meetups (Software Talks, Data Tech Talks), and have a culture that actively supports your growth via Guilds, Labs, and personal development budgets, covering both tech and soft skills. It’s not just a job. It’s a place to grow.

What sets us apart? 

Our mindset. Our vibe. Our people. And while that’s hard to capture in text – come visit us and see for yourself.


About Project

We are looking for a Senior DevOps / Platform Engineer with strong experience in modern CI/CD practices and cloud-native delivery. You will lead the design, implementation, and operation of delivery pipelines, infrastructure templates, and platform capabilities supporting applications across AWS/Azure/GCP environments.
A core responsibility of this role is owning and operating the Harness platform (CI/CD, Feature Flags, Cost Management) and enabling engineering teams to deliver fast, safe, and reliable deployments.

You will be:

owning and operating the Harness Platform by:

  • designing, implementing, and maintaining Harness pipelines for Kubernetes, ECS, serverless, and VM deployments, including canary/blue‑green strategies and automated rollbacks,

  • operating CI pipelines and shared build infrastructure, improving build performance and developer feedback loops,

  • configuring and managing Feature Flags to support progressive delivery and experimentation,

  • integrating Harness SRM/Chaos (if applicable) to support deployment verification, resilience testing, and error budget policies,

  • partnering with FinOps to leverage cost dashboards, budgets, and guardrails for cloud spend optimization;
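To illustrate the kind of pipeline this role owns, a minimal Harness NextGen pipeline sketch for a Kubernetes canary deployment might look like the following (all identifiers are hypothetical, and the schema is abbreviated):

```yaml
# Illustrative sketch only — project/org/step identifiers are hypothetical,
# and required sections (service, environment, infrastructure) are omitted.
pipeline:
  name: checkout-deploy
  identifier: checkout_deploy
  projectIdentifier: demo_project
  orgIdentifier: default
  stages:
    - stage:
        name: Deploy Canary
        identifier: deploy_canary
        type: Deployment
        spec:
          deploymentType: Kubernetes
          execution:
            steps:
              - step:
                  name: Canary Deployment
                  identifier: canary_deploy
                  type: K8sCanaryDeploy
                  spec:
                    instanceSelection:
                      type: Count
                      spec:
                        count: 1
            rollbackSteps:
              - step:
                  name: Canary Delete
                  identifier: canary_rollback
                  type: K8sCanaryDelete
                  spec: {}
```

In practice such a pipeline would add verification between the canary and the full rollout, so a failed canary triggers the rollback steps automatically.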

engineering Delivery Pipelines, Environments, and Infrastructure by:

  • creating reusable pipeline templates, governance controls, and “paved roads” for application teams,
  • implementing secrets management, artifact versioning, and environment promotion flows (dev → test → staging → prod),
  • standardizing infrastructure provisioning with Terraform, Helm/Kustomize, CloudFormation, and ARM/Bicep,
  • supporting Git-based workflows (GitHub, GitLab, Azure Repos, Bitbucket) and applying GitOps practices (Argo CD/Flux) where appropriate;
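The “paved roads” idea above is often expressed as a reusable Terraform module that application teams consume with a handful of inputs; a sketch (the module path and variable names are hypothetical, not an actual registry module):

```hcl
# Hypothetical reusable module — source path and variables are illustrative.
module "service_baseline" {
  source = "./modules/service-baseline"

  service_name = "checkout"
  environment  = "staging" # promoted dev -> test -> staging -> prod
  replicas     = 3

  # Guardrails baked into the module rather than each team's own code:
  enable_image_scanning = true
  allowed_regions       = ["eu-central-1", "eu-west-1"]
}

output "service_url" {
  value = module.service_baseline.url
}
```

The point of the pattern is that governance controls live inside the module, so teams get compliant infrastructure by default rather than by review.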

strengthening Reliability, Security, and Compliance by:

  • embedding automated tests, security scans (SAST, DAST, dependency/image scanning, SBOM), and quality gates into CI/CD pipelines,

  • enforcing RBAC, least privilege, SSO/SCIM, and audit readiness across platforms,

  • contributing to incident response, post-incident reviews, and the continuous evolution of SLIs/SLOs;
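Quality gates of the kind described above are commonly written as policy-as-code; for example, a small OPA/Conftest rule (a hypothetical policy, written against Kubernetes Deployment manifests) that blocks mutable image tags:

```rego
# Hypothetical Conftest policy: fail the pipeline when a Deployment
# ships a container pinned to the mutable :latest tag.
package main

deny[msg] {
    input.kind == "Deployment"
    container := input.spec.template.spec.containers[_]
    endswith(container.image, ":latest")
    msg := sprintf("container %q must not use the :latest tag", [container.name])
}
```

Run as `conftest test deployment.yaml`, a non-empty `deny` set fails the CI step, turning the policy into an enforced gate rather than a convention.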

building and integrating Observability and Performance Tooling by:

  • integrating observability systems (Prometheus/Grafana, OpenTelemetry, Datadog, New Relic) into deployment verification and runtime dashboards,

  • optimizing reliability, build performance, caching, storage architecture, and runtime platform performance;
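Deployment verification of the sort mentioned above typically keys off a small set of service-level queries; a PromQL sketch (the metric and label names are assumptions) that flags a canary whose 5xx error rate exceeds 1%:

```promql
# Hypothetical metric/labels: http_requests_total{service, track, status}
sum(rate(http_requests_total{service="checkout", track="canary", status=~"5.."}[5m]))
/
sum(rate(http_requests_total{service="checkout", track="canary"}[5m]))
> 0.01
```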

driving Collaboration and Enablement by:

  • onboarding product and engineering teams onto the Harness platform,

  • running enablement workshops, producing documentation, and maintaining self-service resources,

  • measuring and reporting delivery metrics such as lead time, deployment frequency, change fail rate, and MTTR — and driving improvement initiatives based on these insights.
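The four delivery metrics listed above (the DORA metrics) can be computed from plain deployment records; a minimal Python sketch over made-up sample data:

```python
from datetime import datetime
from statistics import mean

# Hypothetical deployment records: when the change was committed, when it
# reached production, whether it caused a failure, and when service was restored.
deployments = [
    {"deployed": datetime(2025, 1, 6, 10), "committed": datetime(2025, 1, 5, 16),
     "failed": False, "restored": None},
    {"deployed": datetime(2025, 1, 7, 11), "committed": datetime(2025, 1, 6, 9),
     "failed": True, "restored": datetime(2025, 1, 7, 13)},
    {"deployed": datetime(2025, 1, 9, 15), "committed": datetime(2025, 1, 9, 8),
     "failed": False, "restored": None},
]

# Lead time for changes: mean hours from commit to production.
lead_time = mean((d["deployed"] - d["committed"]).total_seconds() / 3600
                 for d in deployments)

# Deployment frequency: deployments per day over the observed window.
window_days = (deployments[-1]["deployed"] - deployments[0]["deployed"]).days or 1
frequency = len(deployments) / window_days

# Change failure rate: share of deployments that caused a failure.
change_fail_rate = sum(d["failed"] for d in deployments) / len(deployments)

# MTTR: mean hours from a failed deployment to restoration.
failures = [d for d in deployments if d["failed"]]
mttr = mean((d["restored"] - d["deployed"]).total_seconds() / 3600
            for d in failures)

print(f"lead time: {lead_time:.1f} h, frequency: {frequency:.2f}/day, "
      f"CFR: {change_fail_rate:.0%}, MTTR: {mttr:.1f} h")
```

In a real setup the records would come from the CD platform's deployment API and an incident tracker rather than hard-coded data.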

Your profile:

  • ready to work with occasional overlap with EST hours,
  • 5+ years in DevOps, Platform Engineering, or SRE roles,
  • 2+ years hands-on with Harness CI and/or CD, including pipelines-as-code, templates, governance, and rollout strategies,
  • practical experience using AI-powered assistants (e.g. Claude Code, GitHub Copilot, Cursor) to improve productivity, quality, or decision-making in software delivery,
  • strong experience with Kubernetes (operations, Helm/Kustomize, operators),
  • good proficiency with at least one major cloud (AWS, Azure, or GCP),
  • demonstrated expertise with Terraform, reusable modules, and multicloud provisioning (CloudFormation, ARM/Bicep),
  • hands-on experience with scripting (Bash, Python, or Go) and automation mindset,
  • experience with CI/CD and Git-based workflows, GitHub Actions or comparable CI tools,
  • familiarity with security integration (SAST/DAST, scanning, OPA/Conftest),
  • expertise with observability fundamentals (metrics, logs, traces),
  • experience with Ansible for configuration management and orchestration,
  • upper intermediate/advanced English (B2/C1).

Residence in the European Union and a valid EU work permit are required.

Nice to have:

  • GitOps (Argo CD/Flux),
  • Harness Feature Flags, SRM, Chaos, or Cloud Cost Management,
  • Kafka experience (operational or integration),
  • Elasticsearch cluster operations,
  • Redis (caching, broker patterns, session management),
  • FinOps exposure,
  • familiarity with compliance frameworks (SOC2, ISO27001, HIPAA, PCI),
  • SRE experience with SLOs, SLIs, and error budgets,
  • PKI, vaulting, workload identity solutions,
  • experience applying GenAI in a more structured way within the SDLC, including defined workflows, prompt patterns, or tool integrations embedded into daily work,
  • interest in and familiarity with emerging AI-driven practices (e.g. agent-based workflows, automation patterns, AI-augmented development), with a willingness to explore and experiment beyond standard approaches.

 

Recruitment Process:

CV review – HR call – Interview – Client Interview – Decision

 
