Job Description:
Technical Data Architect
  • 3-5 years of experience in data engineering.
  • Strong experience with distributed data processing (Spark, AWS Glue, EMR, or equivalent).
  • Hands-on expertise with data modeling, ETL pipelines, and performance optimization.
  • Strong hands-on expertise in building and optimizing ETL pipelines into Amazon Redshift.
  • Proficiency in Python, PySpark, and SQL; familiarity with Iceberg tables preferred.
  • Solid background in Data Analysis and Data Warehousing concepts (star/snowflake schema design, dimensional modeling, and reporting enablement).
  • Orchestration experience with Airflow, Step Functions, and Lambda.
  • Experience with Redshift performance tuning, schema design, and workload management.
  • Cloud experience (AWS ecosystem preferred).