Job Details
Responsibilities:
- Design and develop ETL pipelines using Azure Data Factory (ADF) for data ingestion and transformation.
- Work with Azure stack components such as Azure Data Lake Storage and Azure Synapse (SQL Data Warehouse) to build robust data solutions.
- Write SQL, Python, and PySpark code for efficient data processing and transformation (an illustrative sketch follows this list).
- Understand and translate business requirements into technical designs.
- Develop mapping documents and transformation rules as per project scope.
- Communicate project status to stakeholders, ensuring smooth project execution.
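By way of illustration only, here is a minimal PySpark sketch of the kind of ingestion-and-transformation step this role covers. The storage account, container paths, column names, and date format are hypothetical placeholders, not details of any actual project.

```python
# Illustrative sketch only: hypothetical paths, columns, and formats.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("policy-ingest").getOrCreate()

# Read raw records landed in ADLS (abfss path is a placeholder).
raw = spark.read.parquet("abfss://raw@examplelake.dfs.core.windows.net/policies/")

# Apply transformation rules of the kind captured in a mapping document:
# rename columns, cast types, and drop rows missing the business key.
clean = (
    raw.withColumnRenamed("POL_NO", "policy_number")
       .withColumn("effective_date", F.to_date(F.col("EFF_DT"), "yyyyMMdd"))
       .filter(F.col("policy_number").isNotNull())
)

# Write the curated output back to the lake for downstream Synapse queries.
clean.write.mode("overwrite").parquet(
    "abfss://curated@examplelake.dfs.core.windows.net/policies/"
)
```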
Requirements:
Must have:
- 10-12 years of experience in data ingestion, data processing, and analytical pipelines for big data and relational databases.
- Hands-on experience with Azure services: ADLS, Azure Databricks, Data Factory, Synapse, Azure SQL DB.
- Experience in SQL, Python, and PySpark for data transformation and processing.
- Familiarity with DevOps and CI/CD deployments.
- Strong communication skills and attention to detail in high-pressure situations.
Preferred:
- Experience in the insurance or financial industry.
We are an equal opportunity employer. All aspects of employment, including the decision to hire, promote, discipline, or discharge, will be based on merit, competence, performance, and business needs. We do not discriminate on the basis of race, color, religion, marital status, age, national origin, ancestry, physical or mental disability, medical condition, pregnancy, genetic information, gender, sexual orientation, gender identity or expression, citizenship or immigration status, veteran status, or any other status protected under federal, state, or local law.