Sr Data Engineer
We are looking for a Snowflake Data Engineer (3-5 years of experience) to architect, build, and scale our data platform. In this role, you won't just be writing queries; you will own the full Snowflake ecosystem, including performance tuning, cost management, and leveraging the latest features such as Snowpark and Cortex AI.
Key Responsibilities
- Warehouse Architecture: Design and implement highly performant Snowflake environments, including multi-cluster warehouses, resource monitors, and custom scaling policies.
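As a sketch of the kind of DDL this involves (warehouse and monitor names here are hypothetical, and quota/size values are placeholders):

```sql
-- Multi-cluster warehouse with an economy scaling policy (illustrative values).
CREATE WAREHOUSE IF NOT EXISTS ANALYTICS_WH
  WAREHOUSE_SIZE = 'MEDIUM'
  MIN_CLUSTER_COUNT = 1
  MAX_CLUSTER_COUNT = 4          -- scale out under concurrency
  SCALING_POLICY = 'ECONOMY'     -- queue briefly before adding clusters
  AUTO_SUSPEND = 60              -- seconds of inactivity before suspending
  AUTO_RESUME = TRUE;

-- Resource monitor that notifies at 80% of quota and suspends at 100%.
CREATE RESOURCE MONITOR IF NOT EXISTS WH_MONITOR
  WITH CREDIT_QUOTA = 100
       FREQUENCY = MONTHLY
       START_TIMESTAMP = IMMEDIATELY
  TRIGGERS ON 80 PERCENT DO NOTIFY
           ON 100 PERCENT DO SUSPEND;

ALTER WAREHOUSE ANALYTICS_WH SET RESOURCE_MONITOR = WH_MONITOR;
```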
- Modern Data Pipelines: Build end-to-end ELT pipelines using Snowpipe, Streams, and Tasks for both real-time and batch ingestion.
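A minimal sketch of that Snowpipe → Stream → Task pattern (all object names, the stage, and the JSON shape are hypothetical):

```sql
-- Continuous ingestion from an external stage into a raw table.
CREATE PIPE raw_events_pipe AUTO_INGEST = TRUE AS
  COPY INTO raw_events
  FROM @events_stage
  FILE_FORMAT = (TYPE = 'JSON');

-- Stream captures change records (CDC) on the raw table.
CREATE STREAM events_stream ON TABLE raw_events;

-- Task runs only when the stream actually has new data.
CREATE TASK transform_events
  WAREHOUSE = etl_wh
  SCHEDULE = '5 MINUTE'
  WHEN SYSTEM$STREAM_HAS_DATA('EVENTS_STREAM')
AS
  INSERT INTO events_clean
  SELECECT v:id::NUMBER, v:ts::TIMESTAMP_NTZ, v:payload
  FROM events_stream;

ALTER TASK transform_events RESUME;  -- tasks are created suspended
```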
- Snowpark Development: Use Snowpark (Python/Java) to migrate complex transformation logic from legacy systems directly into Snowflake's elastic compute.
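A sketch of what that looks like in Snowpark Python; it requires the snowflake-snowpark-python package and real account credentials, and the table and column names are hypothetical:

```python
from snowflake.snowpark import Session
from snowflake.snowpark.functions import col, sum as sum_

# Placeholder credentials; in practice these come from a secrets manager.
session = Session.builder.configs({
    "account": "<account>", "user": "<user>", "password": "<password>",
    "warehouse": "ETL_WH", "database": "ANALYTICS", "schema": "PUBLIC",
}).create()

# The transformation is pushed down and executed on Snowflake's compute,
# not on the client machine.
(session.table("RAW_ORDERS")
    .filter(col("STATUS") == "SHIPPED")
    .group_by("REGION")
    .agg(sum_("AMOUNT").alias("TOTAL_AMOUNT"))
    .write.mode("overwrite").save_as_table("ORDERS_BY_REGION"))
```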
- Optimization & Tuning: Conduct deep-dive analysis of query profiles to optimize micro-partitioning, clustering keys, and materialized views.
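For example, a tuning pass might look like this (table and key are hypothetical; clustering keys are only worth their maintenance cost on large, frequently filtered tables):

```sql
-- Inspect clustering health for a candidate key.
SELECT SYSTEM$CLUSTERING_INFORMATION('SALES.FACT_ORDERS', '(ORDER_DATE)');

-- Apply a clustering key when the query profile shows excessive partition scans.
ALTER TABLE SALES.FACT_ORDERS CLUSTER BY (ORDER_DATE);

-- Find recent queries with poor partition pruning.
SELECT query_id, total_elapsed_time, partitions_scanned, partitions_total
FROM TABLE(INFORMATION_SCHEMA.QUERY_HISTORY())
ORDER BY total_elapsed_time DESC
LIMIT 10;
```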
- Governance & Security: Implement robust security frameworks using Row-Level Security (RLS), Object Tagging, and Dynamic Data Masking.
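A minimal sketch of the masking and row-access primitives involved (all policy, table, role, and mapping-table names are hypothetical):

```sql
-- Dynamic data masking: unmask email only for an approved role.
CREATE MASKING POLICY email_mask AS (val STRING) RETURNS STRING ->
  CASE WHEN CURRENT_ROLE() IN ('PII_READER') THEN val
       ELSE REGEXP_REPLACE(val, '.+@', '*****@')
  END;

ALTER TABLE customers MODIFY COLUMN email
  SET MASKING POLICY email_mask;

-- Row-level security: restrict rows by region via a role-to-region mapping table.
CREATE ROW ACCESS POLICY region_policy AS (sales_region STRING) RETURNS BOOLEAN ->
  CURRENT_ROLE() = 'ADMIN'
  OR EXISTS (
      SELECT 1 FROM security.region_map m
      WHERE m.role_name = CURRENT_ROLE() AND m.region = sales_region
  );

ALTER TABLE orders ADD ROW ACCESS POLICY region_policy ON (sales_region);
```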
- Cost Management: Monitor credit consumption and implement strategies to ensure maximum ROI on Snowflake spend.
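Day to day, that typically starts with queries against the ACCOUNT_USAGE share, along the lines of (note that ACCOUNT_USAGE views lag by up to a few hours):

```sql
-- Credit consumption by warehouse over the last 30 days.
SELECT warehouse_name,
       SUM(credits_used) AS credits_30d
FROM SNOWFLAKE.ACCOUNT_USAGE.WAREHOUSE_METERING_HISTORY
WHERE start_time >= DATEADD('day', -30, CURRENT_TIMESTAMP())
GROUP BY warehouse_name
ORDER BY credits_30d DESC;
```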
Requirements
- Experience: 3-5 years in Data Engineering, with at least 2 years of dedicated Snowflake development.
- Core Expertise:
  - Advanced SQL: Expertise in analytical functions, CTEs, and UDFs (User-Defined Functions).
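To make that concrete, a short illustration combining all three (data and names are hypothetical):

```sql
-- A simple SQL UDF: percentage of a total, NULL-safe on a zero denominator.
CREATE FUNCTION pct(part NUMBER, whole NUMBER)
  RETURNS NUMBER
  AS 'IFF(whole = 0, NULL, part / whole * 100)';

-- CTE plus analytical (window) functions over the aggregated result.
WITH region_totals AS (
  SELECT region, SUM(amount) AS region_amount
  FROM orders
  GROUP BY region
)
SELECT region,
       region_amount,
       RANK() OVER (ORDER BY region_amount DESC) AS region_rank,
       pct(region_amount, SUM(region_amount) OVER ()) AS pct_of_total
FROM region_totals;
```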
  - Data Modeling: Proficiency in star schema design and Snowflake-specific optimizations (clustering).
  - Python/Snowpark: Strong ability to write Python code for data processing within the Snowflake environment.
- Ecosystem Knowledge: Experience with dbt (data build tool) for modular SQL modeling and Git for version-controlled data pipelines.
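For reference, a minimal dbt model sketch of the kind this covers (the model and the upstream `stg_orders` staging model are hypothetical):

```sql
-- models/orders_by_region.sql
{{ config(materialized='table') }}

SELECT region,
       SUM(amount) AS total_amount
FROM {{ ref('stg_orders') }}
GROUP BY region
```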
- Cloud Infrastructure: Familiarity with cloud storage (S3, Azure Blob, or GCS) and how Snowflake interacts with external stages.