Job Description:

Job Role: Data Engineer

Location: 100% Remote

Duration: 12-Month Contract

Primary Skills:

Python, PySpark, ETL, SQL, and Hive, plus cloud experience (GCP preferred: Dataflow and BigQuery)

Responsibilities

  • Python programming: understand and apply object-oriented programming, including function definitions, classes, and modules
  • PySpark programming: develop and execute native PySpark code; understand the PySpark SQL module
  • Databricks: use ADLS Gen2 Azure storage, Delta table functionality, and notebooks
  • Database setup and use
  • Extract, Transform, and Load (ETL): understand ETL concepts, including slowly changing dimensions
  • Implement CI/CD
  • Design ETL and ELT pipelines using Python and PySpark

Minimum qualifications

  • BE/B.Tech/MCA
  • Solid experience in consulting or client service delivery on Azure
  • Good experience developing data ingestion, data processing, and analytical pipelines for big data, relational database, NoSQL, and data warehouse solutions
  • Extensive experience providing practical, hands-on direction in Python and PySpark programming

Preferred qualifications

  • Data Architect with an extensive design and programming background in BI, reporting, ETL tools, big data technologies, and RDBMS platforms
  • Extensive IT experience across numerous technologies, spanning data modeling, data warehouse management, and database development in domains such as Insurance, Reinsurance, and Microsoft Advertising Systems
  • Solid understanding of the data life cycle, including profiling, mining, migration, quality, integration, Master Data Management, and Metadata Management services
