Senior Data Platform Developer
About the role

In this role, you will have the opportunity to design, build, and maintain scalable, secure, and high-performance data platforms that enable advanced analytics, data science, business intelligence, and AI workflows for geoscience data. You'll play a key role in shaping the technical direction and tooling, and in driving best practices in data engineering and platform architecture from the ground up.

In this role, you will have the opportunity to:

- Design, build, and maintain cloud-based data platforms.
- Create and manage scalable data pipelines and infrastructure for ingesting, validating, storing, transforming, and processing data from various internal and external sources, using batch and real-time methods.
- Move data from various sources into data warehouses or data lakes.
- Develop and maintain CI/CD pipelines and observability tools to automate deployments and testing.
- Consume, design, develop, publish, and support internal and external web APIs that provide interfaces to geoscience data.
- Develop and optimize data models to support analytical workloads.
- Provide reliable, well-structured datasets and semantic layers for BI tools, dashboards, advanced analytics, and AI/ML use cases.
- Ensure continuous monitoring of data platform health, performance, and availability.
- Provide tier 2/3 operational support for data platform issues, in conjunction with the Cloud Ops team.
- Implement and enforce data quality standards and security best practices.
- Provide technical guidance and mentorship, and help establish best practices and coding standards.
- Work with stakeholders to align data engineering initiatives with business strategy.

To be successful in this role, you should have:

- A Bachelor's (or Master's) degree in computer science, data engineering, or a related field.
- 7+ years of experience in data platform, data engineering, or data architecture roles.
- Experience with enterprise-scale data platforms for managing structured, semi-structured, and unstructured data.
- Experience designing, building, and maintaining data lakes and/or data warehouses in production environments.
- Experience with relational and NoSQL data storage mechanisms.
- Strong proficiency in SQL, Python, JSON, and distributed data processing frameworks (e.g. Spark).
- Experience with cloud platforms and data services (e.g. Databricks, Snowflake).
- Knowledge of data modelling, ETL/ELT processes, and data warehousing concepts.
- Familiarity with containerization (e.g. Docker, Kubernetes), CI/CD pipelines, and infrastructure-as-code.
- Experience with streaming technologies (e.g. Kafka, Event Hub) and API integrations.
- Strong communication and stakeholder management abilities.

Additional Information

Office-based working environment; work from our Toronto office two or more days per week.

#LI-NP1