Job Description:

Primary Skills: Python, Scala, NiFi, Sqoop, Hadoop, Google Cloud Platform, Spark, Kafka, HDFS, Hive

Secondary Skills: Java, Unix shell scripts, Oozie, Google Cloud BigQuery & Bigtable, Google Dataproc, Google Dataflow, Google Cloud Storage

  • Should have 2+ years of experience, preferably in the telecom domain

  • Big data expert with 8+ years of experience in the Hadoop big data ecosystem

  • Experience in cloud environments, especially GCP

  • Experience in developing both batch and real-time streaming data pipelines

  • Experience as a tech lead on data engineering projects

  • Writes complex SQL queries to perform data acquisition and ingestion for data pipelines

  • Builds data pipelines and performs data engineering activities using technologies such as Python, Hadoop, and Spark

  • Ensures the upkeep of the Hadoop data lake platform by monitoring the Hortonworks HDFS environment

  • Monitors the data lake continuously and ensures that the appropriate support teams are engaged at the right times

  • Works in an Agile/Scrum environment, interacting with the scrum team as well as client stakeholders

  • Understands client requirements from Agile scrum user stories and develops the low-level design required for them

  • Results-oriented and able to keep pace with the demands of the program through self-improvement

  • Google Cloud Certified Data Engineer preferred

  • Obsessively focused on coding standards and code quality
