Job Description

  • Overall 10+ years of experience.
  • Hands-on experience with the Hadoop stack of technologies (Hadoop, Spark, HBase, Hive, Pig, Sqoop, Scala, Flume, HDFS, MapReduce).
  • Hands-on experience with Python and Kafka.
  • Good understanding of database concepts, data design, data modeling, and ETL.
  • Hands-on experience in analyzing, designing, and coding ETL programs involving data pre-processing, extraction, ingestion, quality, normalization, and loading.
  • Working experience delivering projects using Agile methodology; hands-on experience with Jira.
  • Experience in client-facing roles, with strong communication and thought-leadership skills to coordinate deliverables across the SDLC.
  • Good understanding of Teradata and Informatica.
  • Good understanding of machine learning models and artificial intelligence preferred.
  • Good understanding of data components, data processing, and data analytics on AWS is a plus.
  • Experience with data modeling tools such as Erwin is a plus.

We are an equal opportunity employer. All aspects of employment, including the decision to hire, promote, discipline, or discharge, will be based on merit, competence, performance, and business needs. We do not discriminate on the basis of race, color, religion, marital status, age, national origin, ancestry, physical or mental disability, medical condition, pregnancy, genetic information, gender, sexual orientation, gender identity or expression, citizenship/immigration status, veteran status, or any other status protected under federal, state, or local law.

