DataStream, ETL Fundamentals, SQL (Basic + Advanced), Python, Data Warehousing, Time Travel and Fail Safe, Snowpipe, SnowSQL, Modern Data Platform Fundamentals, PL/SQL, T-SQL, Stored Procedures
Specialization
Snowflake Engineering: Data Engineer
Job requirements
Job Title: Senior Data Engineer – DBT & Python
Experience: 5+ Years

Job Summary
We are seeking a highly skilled Senior Data Engineer with strong expertise in DBT, Python, and SQL to design, develop, and optimize scalable data pipelines and ETL processes. The ideal candidate should have hands-on experience building robust data transformation workflows, working with Snowflake, and leveraging cloud platforms for analytics and reporting solutions.

Key Responsibilities
• Design, develop, and maintain DBT models, transformations, and SQL code for analytics and reporting.
• Build scalable and efficient ETL pipelines using DBT and other relevant tools.
• Write complex, high-performance SQL queries to process and analyze large datasets.
• Develop clean, scalable, and efficient Python code, with strong hands-on experience in Pandas and NumPy (a minimal Pandas sketch follows this listing).
• Optimize data pipelines for performance, scalability, and cost efficiency.
• Troubleshoot and resolve ETL and data pipeline performance issues.
• Develop scripts using Unix shell scripting, Python, and other scripting tools for data extraction, transformation, and loading.
• Write and optimize Snowflake SQL queries and support Snowflake implementations (see the connector example below).
• Work with orchestration tools such as Airflow or similar workflow management platforms (see the DAG sketch below).
• Integrate user-facing elements into applications where required.
• Collaborate with internal stakeholders to understand business requirements and translate them into technical solutions.
• Ensure data quality, validation, and testing within DBT workflows.

Required Skills & Qualifications
• 6+ years of overall IT experience.
• Proven hands-on experience with DBT (Data Build Tool), including model development, transformations, and testing.
• Strong programming expertise in Python (mandatory).
• Advanced proficiency in SQL, including writing complex queries on large datasets.
• Hands-on experience designing and maintaining ETL pipelines.
• Experience with Unix shell scripting.
• Strong understanding of data warehousing concepts and best practices.

Preferred Qualifications
• Experience with Snowflake implementation and optimization.
• Knowledge of Salesforce CDP.
• Experience with Airflow or other data orchestration tools.
• Exposure to cloud platforms such as AWS, GCP, or Azure.
• Experience working with cloud storage solutions such as S3, GCS, or Azure Blob Storage.
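To make the Pandas/NumPy expectation concrete, here is a minimal transformation sketch of the kind this role describes. The file name and column names (orders.csv, order_id, order_date, amount, region) are hypothetical placeholders, not anything taken from the job description.

```python
# Minimal Pandas/NumPy cleaning-and-aggregation sketch.
# All column names and the input file are hypothetical.
import numpy as np
import pandas as pd

def clean_orders(path: str) -> pd.DataFrame:
    """Load raw orders, drop bad rows, and add a derived column."""
    df = pd.read_csv(path, parse_dates=["order_date"])
    df = df.dropna(subset=["order_id"])        # drop rows missing the key
    df["amount"] = df["amount"].fillna(0.0)    # default missing amounts
    df["log_amount"] = np.log1p(df["amount"])  # tame skew for analytics
    return df

def regional_totals(df: pd.DataFrame) -> pd.DataFrame:
    """Aggregate order amounts per region for reporting."""
    return df.groupby("region", as_index=False)["amount"].sum()

if __name__ == "__main__":
    orders = clean_orders("orders.csv")
    print(regional_totals(orders))
```

In an interview, being able to walk through a small pipeline like this, including why missing keys are dropped while missing amounts are defaulted, demonstrates the hands-on Pandas fluency the listing asks for.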
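For the Snowflake query work, a sketch using the official snowflake-connector-python package is below. The warehouse, database, schema, and table names are hypothetical, and credentials are read from environment variables rather than hard-coded.

```python
# Sketch of querying Snowflake from Python via snowflake-connector-python.
# Connection parameters are placeholders — substitute your own account,
# warehouse, and credentials.
import os
import snowflake.connector

conn = snowflake.connector.connect(
    user=os.environ["SNOWFLAKE_USER"],
    password=os.environ["SNOWFLAKE_PASSWORD"],
    account=os.environ["SNOWFLAKE_ACCOUNT"],
    warehouse="ANALYTICS_WH",   # hypothetical warehouse
    database="ANALYTICS",       # hypothetical database
    schema="REPORTING",         # hypothetical schema
)
try:
    cur = conn.cursor()
    # Parameterized query: %(name)s binding avoids SQL injection.
    cur.execute(
        "SELECT region, SUM(amount) AS total "
        "FROM orders WHERE order_date >= %(since)s "
        "GROUP BY region",
        {"since": "2024-01-01"},
    )
    for region, total in cur.fetchall():
        print(region, total)
finally:
    conn.close()
```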
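Finally, the orchestration and DBT-testing responsibilities often meet in a single DAG: run the models, then run dbt's tests so data-quality checks gate downstream reporting. The sketch below assumes Airflow 2.4+ (for the `schedule` argument) and a hypothetical dbt project path; it is an illustration, not a prescribed setup.

```python
# Minimal Airflow DAG sketch: dbt run, then dbt test.
# DBT_DIR and the dag_id are hypothetical.
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

DBT_DIR = "/opt/dbt/analytics_project"  # hypothetical dbt project path

with DAG(
    dag_id="dbt_daily_build",
    schedule="@daily",          # Airflow 2.4+ keyword
    start_date=datetime(2024, 1, 1),
    catchup=False,
) as dag:
    dbt_run = BashOperator(
        task_id="dbt_run",
        bash_command=f"cd {DBT_DIR} && dbt run",
    )
    dbt_test = BashOperator(
        task_id="dbt_test",
        bash_command=f"cd {DBT_DIR} && dbt test",
    )
    dbt_run >> dbt_test  # tests only run if the build succeeds
```

Sequencing `dbt test` after `dbt run` is a common way to satisfy the "data quality, validation, and testing within DBT workflows" responsibility: a failed test stops the DAG before bad data reaches reports.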
Tailor your resume for each Brillio role by matching the exact technology keywords from the job description — Azure Data Factory, Microsoft Fabric, Power BI, Snowflake, and PySpark appear across many current openings, and Lever's search algorithms rely on precise keyword matches.
Position yourself as a consultant, not just a technician — highlight client-facing experience, stakeholder communication, and business outcome delivery in your resume and interview responses, since Brillio's model demands both technical depth and consulting soft skills.