Job Description:
Hands-on experience with Hadoop application development.
Familiarity with Hadoop administration and cluster management, including configuration management, troubleshooting, and performance tuning.
Expertise in Java is a MUST
Proficient with Unix/Linux (shell scripting, configuration management, and OS tuning).
Proficient with configuration management/automation tooling (Puppet, Chef, Salt) and with building and assembling packages (Maven, Jenkins).
Excellent command of SQL – best practices, optimization, troubleshooting, debugging
Hands-on experience with Hadoop technologies (YARN, Hive/Hive2, MR, Tez, Sqoop/Sqoop2, Flume, Spark, Pig, Kafka, Storm, etc.).
Knowledge of Hadoop Security and Kerberos.
Exposure to NoSQL databases such as HBase, Cassandra, and MongoDB.
Knowledge of traditional data analytics warehouses.
Experience benchmarking Hadoop ecosystems, analyzing system bottlenecks, and proposing solutions to eliminate them.
Ability to clearly articulate the pros and cons of various Big Data technologies and platforms.
Ability to document use cases, solutions, and recommendations.
Excellent written and verbal communication skills.
Liaise with project managers and other solution architects in planning and governance activities related to the project.
Knowledge of the open source community (opening and tracking issues, and identifying problematic ones ahead of time by monitoring open JIRA issues in the community).
Experience with version control and continuous integration (Git, Bamboo, Jenkins)
Understanding of networking (tracing, packet capture, etc.).
Exposure to the design and development of workflows (e.g., using Oozie).
Knowledge of Scala and Python is a plus.