Job Description:
Main Responsibilities
1. Hadoop architecture and its ecosystem – 5 yrs
2. Kafka
3. Nifi
4. Zookeeper
5. Back-end programming, specifically Java, JavaScript, Node.js, and OOAD – 5 yrs
6. Writing high-performance, modular, reliable, and maintainable code – 5 yrs
7. Ability to write MapReduce jobs, HiveQL, and Spark code in Scala and Python – 5 yrs
8. Good knowledge of database structures, theories, principles, and practices – 5 yrs
9. Ability to write Pig Latin scripts – 5 yrs
10. Familiarity with data-loading tools such as Flume and Sqoop
11. Knowledge of workflow schedulers such as Oozie
12. Analytical and problem-solving skills applied to the Big Data domain
13. Proven understanding of Hadoop, HBase, Hive, and Pig
14. Good grasp of multi-threading and concurrency concepts
15. Knowledge of Git/Jenkins and pipeline automation is a must
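As a minimal sketch of the map/reduce pattern named in the requirements above (plain Python only, not an actual Hadoop or Spark job; the input lines and function names are illustrative):

```python
from collections import defaultdict
from itertools import chain

# Map phase: emit (word, 1) pairs for each line of input.
def map_phase(line):
    return [(word.lower(), 1) for word in line.split()]

# Shuffle + reduce phase: group the pairs by key and sum the counts.
def reduce_phase(pairs):
    counts = defaultdict(int)
    for word, n in pairs:
        counts[word] += n
    return dict(counts)

lines = ["Hadoop and Kafka", "Kafka and NiFi"]
pairs = chain.from_iterable(map_phase(line) for line in lines)
word_counts = reduce_phase(pairs)
print(word_counts)  # {'hadoop': 1, 'and': 2, 'kafka': 2, 'nifi': 1}
```

On a real cluster the map and reduce steps run in parallel across nodes, but the per-key grouping logic is the same.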

Key Skills:

Interview Process: An initial telephonic screening, followed by an in-person interview (charges are not covered)