Key Responsibilities:
Architect and maintain high-performance data pipelines across diverse data sources.
Build scalable data processing solutions on Snowflake, AWS, and GCP.
Implement ML algorithms for data quality monitoring and anomaly detection.
Develop data models, enforce data governance, and maintain documentation.
Collaborate with analysts, data scientists, and stakeholders to deliver actionable insights.
Required Skills:
14+ years in ETL/ELT, data warehousing, and data modeling.
Advanced SQL and Python; strong performance tuning and query optimization.
Expertise with Snowflake, AWS services (S3, Lambda, Data Pipeline), and Apache Airflow.
Experience with Databricks or IBM DataStage a plus.
Strong understanding of cloud architecture, CI/CD, and distributed data processing (Spark preferred).
Education:
Bachelor’s degree in Computer Science, IT, or related field.
Equal Employment Opportunity:
We are an equal opportunity employer. All aspects of employment, including the decision to hire, promote, discipline, or discharge, will be based on merit, competence, performance, and business needs. We do not discriminate on the basis of race, color, religion, marital status, age, national origin, ancestry, physical or mental disability, medical condition, pregnancy, genetic information, gender, sexual orientation, gender identity or expression, citizenship/immigration status, veteran status, or any other status protected under federal, state, or local law.