- Design, develop, and maintain scalable and efficient data pipelines using Snowflake, PySpark, and SQL.
- Write complex, optimized SQL queries to extract, transform, and load data.
- Develop and implement data models, schemas, and architecture that support banking domain requirements.
- Collaborate with data analysts, data scientists, and business stakeholders to gather data requirements.
- Automate data workflows and ensure data quality, accuracy, and integrity.
- Manage and coordinate release processes for data pipelines and analytics solutions.
- Monitor, troubleshoot, and optimize the performance of data systems.
- Ensure compliance with data governance, security, and privacy standards within the banking domain.
- Maintain documentation of data architecture, pipelines, and processes.
- Stay up to date with industry trends and incorporate best practices.
- Proven experience as a Data Engineer or in a similar role, with a focus on Snowflake, Python, PySpark, and SQL.
- Strong understanding of data warehousing concepts and cloud data platforms, especially Snowflake.
- Hands-on experience with release management, deployment, and version control practices.
- Solid understanding of banking and financial services industry data and compliance requirements.
- Proficiency in Python scripting and PySpark for data processing and automation.
- Experience with ETL/ELT processes and tools.
- Knowledge of data governance, security, and privacy standards.
- Excellent problem-solving and analytical skills.
- Strong communication and collaboration abilities.
- Expertise in CI/CD practices and implementation.
- Strong background in financial services, with knowledge of regulatory and compliance requirements.