Job Description:
Hadoop Application Architect with hands-on architecture and development experience building analytical applications using the Hadoop ecosystem. This experience should include:

· Designing Hadoop data architectures, including HBase and Hive database design, and data processing/transformation using Spark, NiFi, and/or Pig

· Architecting and designing large-scale Hadoop infrastructures that provide for scalability and availability

· Hands-on experience with Hortonworks and/or Cloudera distributions

· Developing Hadoop components using Scala, Java, Python, MapReduce, or other languages/tools (Scala preferred; see the sketch after this list)

· Hadoop performance tuning and optimization

· Experience with security architecture setup for Hadoop environments using LDAP, AD, and Apache Ranger

· Experience with common Hadoop file formats including Avro, Parquet, and ORC

· Experience with data ingestion from Kafka and the use of that data for streaming analytics

· Experience with many of the following: Oozie, Zookeeper, Sqoop, Flume

· Working knowledge of Hadoop data consumption via SQL front-ends and BI tools would be a big plus

· Strong communication skills, both written and verbal

· Ability to train and coach an Agile team in Hadoop skills and best practices
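For illustration, a minimal sketch of the kind of Spark development in Scala described above, assuming a Hive-enabled cluster; the paths, table names, and column names are hypothetical placeholders, not part of this role's actual environment:

// Minimal Spark batch sketch (Scala): read Parquet from HDFS, transform, write to Hive.
// All paths, table names, and columns below are hypothetical placeholders.
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

object EventAggregator {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("event-aggregator")
      .enableHiveSupport() // assumes a Hive-enabled Spark build/cluster
      .getOrCreate()

    // Read raw events from a Parquet landing zone on HDFS.
    val events = spark.read.parquet("hdfs:///data/landing/events")

    // Example transformation: daily event counts per event type.
    val daily = events
      .withColumn("event_date", to_date(col("event_ts")))
      .groupBy(col("event_date"), col("event_type"))
      .agg(count(lit(1)).as("event_count"))

    // Write the aggregate to a Hive table, partitioned by date.
    daily.write
      .mode("overwrite")
      .partitionBy("event_date")
      .saveAsTable("analytics.daily_event_counts")

    spark.stop()
  }
}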

Responsibilities:

· Designing data flows from Kafka event streams to HDFS, HBase, and Hive data stores and to a relational data mart (SQL data warehouse); see the streaming sketch after this list

· Supporting Scrum teams day to day with code reviews and detailed design walkthroughs

· Performing hands-on POCs and tool evaluations

· Performance tuning efforts

· Infrastructure sizing/capacity planning
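For illustration, a minimal Spark Structured Streaming sketch of the Kafka-to-HDFS leg of such a data flow; the broker address, topic, payload schema, and paths are hypothetical, and it assumes the spark-sql-kafka connector is on the classpath:

// Minimal Structured Streaming sketch (Scala): Kafka -> Parquet on HDFS.
// Requires the spark-sql-kafka-0-10 connector; all names below are placeholders.
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._
import org.apache.spark.sql.types._

object KafkaToHdfs {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("kafka-to-hdfs")
      .getOrCreate()

    // Assumed JSON payload schema for the Kafka topic.
    val schema = new StructType()
      .add("event_id", StringType)
      .add("event_type", StringType)
      .add("event_ts", TimestampType)

    // Read the Kafka topic as a streaming DataFrame.
    val raw = spark.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "broker1:9092") // placeholder broker
      .option("subscribe", "events")                     // placeholder topic
      .load()

    // Kafka values arrive as bytes; parse the JSON payload into columns.
    val events = raw
      .select(from_json(col("value").cast("string"), schema).as("e"))
      .select("e.*")

    // Continuously append Parquet files to HDFS, with checkpointing for recovery.
    val query = events.writeStream
      .format("parquet")
      .option("path", "hdfs:///data/landing/events")
      .option("checkpointLocation", "hdfs:///checkpoints/events")
      .outputMode("append")
      .start()

    query.awaitTermination()
  }
}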