Sr. Staff SDET (Analytics)
Netradyne harnesses the power of Computer Vision and Edge Computing to revolutionize the modern-day transportation ecosystem. We are a leader in fleet safety solutions. With growth exceeding 4x year over year, our solution is quickly being recognized as a significant disruptive technology. Our team is growing, and we need forward-thinking, uncompromising, competitive team members to continue to facilitate our growth.
About Netradyne
Netradyne provides AI-powered technologies for fleet management and safer roads. An award-winning industry leader in fleet safety and video telematics solutions, Netradyne empowers thousands of commercial fleet customers across North America, Europe, and Asia to enhance their driver performance, reduce risk, and optimize operations.
Netradyne sets the standard among transportation technology companies for enhancing and sustaining road safety, with an industry-leading 25+ billion miles vision-analyzed for risk and an industry-first driver scoring system that reinforces safe behaviors. Founded in 2015, Netradyne is headquartered in San Diego with offices in San Francisco, Nashville, the UK and Bangalore.
For more details, visit www.netradyne.com.
Role Overview
As a Sr. Staff SDET in the Analytics team, you will own the quality engineering strategy, automation architecture, and reliability validation for our offline batch data pipelines and end-to-end analytics/KPI outputs. This is a hands-on technical leadership role focused on building high-signal, deterministic validation that is deeply integrated into Jenkins CI/CD, along with strong test data management and leadership-grade visibility via dashboards.
You will design and standardize a PyTest-based validation framework (plus internal libraries + CLI tooling), establish quality gates that prevent silent regressions, and drive cross-team adoption of quality standards that materially improve reliability, correctness, and release confidence.
Key Responsibilities
Batch Data Pipeline Regression Automation
- Design and implement automated regression validation for offline batch pipelines covering correctness (joins/aggregations/reconciliation), integrity (null/uniqueness/referential), and cross-table/time-window consistency.
- Standardize validation approaches: golden datasets, snapshot/deterministic diff checks, and contract-based checks across pipeline boundaries.
- Build reusable Python validation libraries and utilities that teams can extend across pipeline families.
- Define strategies to handle expected variance (e.g., late-arriving data) without weakening correctness guarantees.
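To give a flavor of the kind of validation described above, here is a minimal sketch of a deterministic snapshot-diff check against a golden dataset. All names (`diff_against_golden`, the `driver_id`/`trips` columns) are illustrative, not an existing internal library:

```python
# Sketch of a deterministic snapshot-diff check for a batch pipeline output.
# Rows are compared against a versioned "golden" snapshot; any drift is
# reported per key and column, so a regression is actionable, not just a
# pass/fail bit.

def diff_against_golden(actual, golden, key):
    """Return sorted row-level differences between actual output and the golden snapshot."""
    actual_by_key = {row[key]: row for row in actual}
    golden_by_key = {row[key]: row for row in golden}
    diffs = []
    for k in golden_by_key.keys() | actual_by_key.keys():
        a, g = actual_by_key.get(k), golden_by_key.get(k)
        if a is None:
            diffs.append((k, "missing_in_actual", g))
        elif g is None:
            diffs.append((k, "unexpected_in_actual", a))
        elif a != g:
            changed = sorted(c for c in g if a.get(c) != g.get(c))
            diffs.append((k, "changed_columns", changed))
    return sorted(diffs, key=lambda d: d[0])

golden = [{"driver_id": 1, "trips": 10}, {"driver_id": 2, "trips": 4}]
actual = [{"driver_id": 1, "trips": 10}, {"driver_id": 2, "trips": 5}]
print(diff_against_golden(actual, golden, "driver_id"))
```

In practice a check like this would sit behind a PyTest assertion, with the golden snapshot versioned alongside the pipeline code.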
CI/CD for Data Pipelines – Jenkins
- Build and operationalize Jenkins-based CI/CD gates for batch pipelines: pre-merge validations, nightly regressions, pre-prod checks, and release promotions.
- Improve CI signal quality by reducing flakes, ensuring deterministic execution, and producing actionable diagnostics (logs/artifacts/metadata).
- Optimize runtime and reliability via tiered suites (smoke vs. regression) and smart execution (parallelization/test selection).
- Publish standardized CI reporting (pass/fail trends, top failure causes, time-to-detect/time-to-triage).
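One way the tiered-suite idea above can be wired into Jenkins is a small mapping from CI trigger to the PyTest marker expression for that tier. The trigger and tier names below are hypothetical placeholders:

```python
# Sketch: map a CI trigger to the pytest marker expression for the tier to run.
# A Jenkins stage would then invoke something like:
#   pytest -m "<expression>" --junitxml=results.xml

def suites_for_trigger(trigger):
    """Return the marker expression for the validation tier a trigger should run."""
    tiers = {
        "pre-merge": "smoke",                          # fast, blocking gate
        "nightly": "smoke or regression",              # full correctness sweep
        "pre-prod": "smoke or regression or release",  # everything before promotion
    }
    if trigger not in tiers:
        raise ValueError(f"unknown CI trigger: {trigger}")
    return tiers[trigger]

print(suites_for_trigger("pre-merge"))  # smoke
```

Keeping this mapping in one place makes the gating policy auditable and lets every pipeline family inherit the same tiering by default.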
Test Data Management
- Own the test data strategy for analytics validation: curated datasets, versioning/lifecycle, refresh cadence, retention, and reproducibility standards.
- Establish best practices for deterministic fixtures, controlled dataset updates, and safe data handling (masking/anonymization as needed).
- Enable teams to run validations reliably in CI and pre-prod without ad-hoc setup.
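A deterministic fixture in this sense might look like the following sketch: synthetic data generated from a pinned seed and a version tag, so every CI run and every environment sees identical inputs. The field names and version tag are invented for illustration:

```python
import random

# Sketch: a deterministic test-data fixture. Seeding makes refreshed datasets
# reproducible across CI runs; the version tag pins which snapshot is in use.
DATASET_VERSION = "v3"  # hypothetical version tag

def make_trip_events(seed=42, n=5):
    """Generate a small, fully reproducible batch of synthetic trip events."""
    rng = random.Random(seed)
    return [
        {"trip_id": i, "duration_min": rng.randint(5, 120), "dataset": DATASET_VERSION}
        for i in range(n)
    ]

# Same seed -> identical data on every run, in every environment.
assert make_trip_events() == make_trip_events()
```

In a real suite this would typically be exposed as a PyTest fixture, with controlled updates to the seed or version treated as reviewed dataset changes.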
End-to-End Analytics & KPI Correctness
- Implement automated KPI correctness checks: metric-definition compliance, aggregation sanity, source reconciliation, and regression detection on KPI distributions.
- Build audit-style end-to-end validation from source → transforms → warehouse → KPI outputs → dashboards/reports.
- Standardize KPI validation so new pipelines/KPIs inherit guardrails by default.
- Deliver self-serve dashboards for pipeline health, data quality health, and KPI correctness signals used for release readiness.
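The source-reconciliation idea above can be sketched as: recompute the KPI directly from raw rows and compare it to the pipeline-reported value within an explicit tolerance, so silent aggregation drift fails loudly. The metric and field names here are hypothetical:

```python
# Sketch of a source-reconciliation check for a KPI: recompute the average
# driver score from raw source rows and compare against the value the
# pipeline reported, within a stated relative tolerance.

def reconcile_kpi(raw_rows, reported_value, rel_tol=1e-6):
    """Recompute the metric from source rows; return (within_tolerance, recomputed)."""
    recomputed = sum(r["score"] for r in raw_rows) / len(raw_rows)
    within = abs(recomputed - reported_value) <= rel_tol * max(abs(reported_value), 1.0)
    return within, recomputed

raw = [{"score": 90}, {"score": 70}, {"score": 80}]
ok, recomputed = reconcile_kpi(raw, reported_value=80.0)
print(ok, recomputed)  # True 80.0
```

The same shape extends to distribution-level regression checks, where summary statistics of the KPI are compared run-over-run instead of a single aggregate.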
Incident RCA, Prevention & Reliability Engineering
- Lead investigations into pipeline failures, missed SLAs, and KPI regressions; drive closure through preventive guardrails.
- Convert incident learnings into automated checks, stronger gates, monitoring/alerts, and clear runbooks with ownership.
- Track reliability metrics and demonstrate measurable reduction in repeat incidents and improved MTTR.
Cross-team Quality Leadership (Sr. Staff Expectation)
- Partner with Data Engineering, Analytics, Platform, and Release stakeholders to define quality standards and release-readiness criteria.
- Mentor engineers/SDETs on validation architecture and best practices, raising the bar for “done” on analytics pipelines.
- Drive org-wide quality initiatives such as shared libraries, consistent validation tiering, and unified reporting.
Requirements (Mandatory Skills)
- Bachelor’s/Master’s in Computer Science, Electrical Engineering, or equivalent practical experience.
- 8–12 years in SDET / Quality Engineering / Automation with ownership of automation frameworks, CI/CD pipelines, and reliability improvements at scale.
- Strong Python skills (OOP, debugging, DS/Algo) with experience building maintainable libraries/frameworks (not only test scripts).
- Deep hands-on experience with PyTest framework design (fixtures/parametrization/plugins), suite structuring, and test diagnostics (artifacts/logging).
- Strong SQL skills and experience validating analytics data in Snowflake and relational databases (Postgres/MySQL or equivalent), including reconciliation logic and correctness checks.
- Proven experience building and operating Jenkins pipelines for gating, repeatable execution, and reliable reporting.
- Strong test data management expertise: curated datasets, versioning, reproducibility, refresh strategy, and CI-friendly execution.
- Excellent communication and stakeholder management: can translate quality risks into clear execution plans and drive cross-team alignment.
Preferred Skills
- C++ testing with GTest (preferred; not mandatory).
- Experience with pipeline observability/monitoring (freshness/completeness/anomaly checks, alerting, dashboarding).
- Familiarity with reliability practices (incident management, postmortems, prevention mechanisms, SLAs/SLOs).
- Experience building internal developer tooling (CLI tools, lightweight services, reusable libraries).
We are committed to an inclusive and diverse team. Netradyne is an equal-opportunity employer. We do not discriminate based on race, color, ethnicity, ancestry, national origin, religion, sex, gender, gender identity, gender expression, sexual orientation, age, disability, veteran status, genetic information, marital status, or any legally protected status.
If there is a match between your experiences/skills and the Company's needs, we will contact you directly.
Applicants only; recruiting agencies, please do not contact us.
Recruitment Fraud Alert!
There has been an increase in fraud targeting job seekers. Scammers may present themselves to job seekers as Netradyne employees or recruiters. Please be aware that Netradyne does not request sensitive personal data from applicants via text/instant message or any unsecured method; does not promise any advance payment for work-equipment setup; and does not use recruitment or job-sourcing agencies that charge candidates an advance fee of any kind. Official communication about your application will only come from emails ending in ‘@netradyne.com’ or ‘@us-greenhouse-mail.io’.
Please review and apply to our available job openings at Netradyne.com/company/careers. For more information on avoiding and reporting scams, please visit the Federal Trade Commission's job scams website.