Senior Data Engineer
We are seeking a high-caliber Data Engineer with 3-5 years of experience to join our Data Platform team. You will build and maintain a unified "Lakehouse" architecture. The ideal candidate is an expert in Snowflake for high-concurrency warehousing and in Databricks for complex, large-scale data processing and engineering.
Key Responsibilities:
- Pipeline Architecture: Design and implement end-to-end ELT/ETL pipelines following the Medallion Architecture (Bronze/Silver/Gold) across Databricks and Snowflake.
- Processing Mastery: Use PySpark and Databricks SQL for heavy-lift data engineering, cleansing, and unstructured data processing.
- Warehouse Optimization: Architect and manage Snowflake environments, including virtual warehouses, clustering keys, and materialized views, to deliver sub-second query performance.
- Data Governance: Implement security and governance frameworks using Unity Catalog (Databricks) and Role-Based Access Control (Snowflake).
- Integration & Orchestration: Synchronize data between Delta Lake and Snowflake using tools such as Snowpipe, dbt, or Airflow.
- Cost Management: Monitor and optimize credit consumption on both platforms to ensure architectural efficiency.
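For candidates unfamiliar with the terminology, the Bronze/Silver/Gold flow named above can be sketched in plain Python. This is an illustration only: in production each layer would be a Delta table written by Spark jobs, and all record fields here are hypothetical.

```python
# Minimal in-memory sketch of a Medallion (Bronze -> Silver -> Gold) flow.
# Plain dicts and lists stand in for Spark DataFrames / Delta tables.

raw_events = [  # Bronze: raw, as-ingested records (duplicates and nulls allowed)
    {"user_id": "u1", "amount": "10.5", "ts": "2024-01-01"},
    {"user_id": "u1", "amount": "10.5", "ts": "2024-01-01"},  # duplicate
    {"user_id": "u2", "amount": None,   "ts": "2024-01-02"},  # bad record
    {"user_id": "u2", "amount": "7.0",  "ts": "2024-01-03"},
]

def to_silver(bronze):
    """Silver: deduplicate, drop invalid rows, cast types."""
    seen, silver = set(), []
    for row in bronze:
        key = (row["user_id"], row["amount"], row["ts"])
        if row["amount"] is None or key in seen:
            continue
        seen.add(key)
        silver.append({**row, "amount": float(row["amount"])})
    return silver

def to_gold(silver):
    """Gold: business-level aggregate (total spend per user)."""
    totals = {}
    for row in silver:
        totals[row["user_id"]] = totals.get(row["user_id"], 0.0) + row["amount"]
    return totals

gold = to_gold(to_silver(raw_events))
print(gold)  # {'u1': 10.5, 'u2': 7.0}
```

Each layer is a pure function of the previous one, which is the property that makes the pattern easy to backfill and test.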
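The orchestration responsibility above is, at its core, dependency ordering. A minimal stdlib sketch of what a scheduler like Airflow resolves (task names here are hypothetical):

```python
from graphlib import TopologicalSorter  # Python 3.9+

# Each task maps to the set of tasks it depends on, as in an Airflow DAG.
dag = {
    "ingest_bronze":    set(),
    "transform_silver": {"ingest_bronze"},
    "load_snowflake":   {"transform_silver"},
    "publish_gold":     {"load_snowflake"},
}

order = list(TopologicalSorter(dag).static_order())
print(order)  # ['ingest_bronze', 'transform_silver', 'load_snowflake', 'publish_gold']
```

A real orchestrator adds scheduling, retries, and backfills on top of this ordering.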
Required Qualifications:
- Experience: 3-5 years in a Data Engineering role with a focus on cloud-native stacks (AWS, Azure, or GCP).
- Databricks Skills: Deep proficiency in Spark (Python/Scala) and Delta Lake, plus experience with Delta Live Tables (DLT).
- Snowflake Skills: Expert knowledge of Snowpark, Streams & Tasks, Zero-Copy Cloning, and Data Sharing.
- SQL: Advanced mastery of analytical SQL (window functions, complex joins, and recursive CTEs).
- Tooling: Hands-on experience with dbt (Data Build Tool) and version control with Git.
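As a concrete (runnable via SQLite) illustration of the analytical SQL named above, here is a window function and a recursive CTE; table and column names are hypothetical:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE sales(region TEXT, amount INT);
INSERT INTO sales VALUES ('east', 10), ('east', 30), ('west', 20);
""")

# Window function: rank rows within each region by amount.
ranked = con.execute("""
    SELECT region, amount,
           RANK() OVER (PARTITION BY region ORDER BY amount DESC) AS rnk
    FROM sales
    ORDER BY region, rnk
""").fetchall()

# Recursive CTE: generate the numbers 1..5.
nums = con.execute("""
    WITH RECURSIVE seq(n) AS (
        SELECT 1
        UNION ALL
        SELECT n + 1 FROM seq WHERE n < 5
    )
    SELECT n FROM seq
""").fetchall()

print(ranked)  # [('east', 30, 1), ('east', 10, 2), ('west', 20, 1)]
print(nums)    # [(1,), (2,), (3,), (4,), (5,)]
```

The same constructs carry over to Snowflake and Databricks SQL, where they are used for ranking, sessionization, and hierarchy traversal.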