Job Description:
Responsibilities:
Deploying and maintaining Hadoop clusters, adding and removing nodes using cluster management tools, configuring NameNode high availability, and keeping track of all running Hadoop jobs.
Implementing, managing, and administering the overall Hadoop infrastructure.
Taking care of the day-to-day running of Hadoop clusters.
Working closely with the database, network, BI, and application teams to ensure that all big data applications are highly available and performing as expected.
When working with the open-source Apache distribution, manually setting up all the configuration files: core-site.xml, hdfs-site.xml, yarn-site.xml, and mapred-site.xml. With popular Hadoop distributions such as Hortonworks, Cloudera, or MapR, these files are generated at setup time, so the Hadoop admin need not configure them manually.
Responsible for capacity planning: estimating the requirements for scaling the Hadoop cluster up or down.
Responsible for deciding the size of the Hadoop cluster based on the data to be stored in HDFS.
Resource and security management.
Troubleshooting application errors and ensuring that they do not recur.
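The configuration files named above can be sketched as minimal XML fragments. The host name, port, and exclude-file path below are illustrative placeholders, not values from this posting; real values depend on the cluster:

```xml
<!-- core-site.xml: default filesystem URI (nn1.example.com is a hypothetical host) -->
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://nn1.example.com:8020</value>
  </property>
</configuration>

<!-- hdfs-site.xml: block replication factor, plus the exclude file
     consulted when decommissioning DataNodes (path is illustrative) -->
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>3</value>
  </property>
  <property>
    <name>dfs.hosts.exclude</name>
    <value>/etc/hadoop/conf/dfs.exclude</value>
  </property>
</configuration>
```

After adding a host to the exclude file, running `hdfs dfsadmin -refreshNodes` tells the NameNode to begin decommissioning that DataNode, which is one common way the add/remove-nodes duty above is carried out.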
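The cluster-sizing duty is, at its simplest, arithmetic over the replication factor and headroom. A minimal sketch, assuming the HDFS default replication factor of 3 and 25% headroom for temporary and intermediate data (both assumed figures, not from the posting):

```shell
#!/bin/sh
# Rough raw-capacity estimate for an HDFS cluster.
# Assumptions (illustrative): replication factor 3, 25% headroom,
# i.e. raw = data * replication / (1 - 0.25) = data * replication * 4/3.
data_tb=100          # logical data to store, in TB (example value)
replication=3
raw_tb=$(( data_tb * replication * 4 / 3 ))
echo "raw capacity needed: ${raw_tb} TB"
```

So 100 TB of logical data would call for roughly 400 TB of raw disk across the cluster under these assumptions; dividing by per-node disk then gives a node count.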

Preferred Skills:
Excellent knowledge of UNIX/Linux, since Hadoop runs on Linux.
Experience in medium to large scale Hadoop cluster management.
Work experience with Kerberized clusters.
Knowledge of configuration-management and automation tools such as Puppet or Chef for non-trivial installations.
Knowledge of cluster monitoring tools like Ambari, Ganglia, or Nagios.
Knowledge of core Java is a plus for a Hadoop admin but not mandatory.
Good understanding of OS concepts, process management and resource scheduling.
Basic understanding of networking, CPU, memory, and storage.
Strong shell scripting skills.
Familiarity with the components of the Hortonworks Data Platform, such as Ambari, Ranger, Hive, HBase, Spark, and Kafka.
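The shell-scripting skill above typically means automating checks against command output. A minimal sketch: the here-string below stands in for the output of `hdfs dfsadmin -report` (a real command, but not run here), and the script extracts the dead-node count from it:

```shell
#!/bin/sh
# Sketch of a health check one might script against `hdfs dfsadmin -report`.
# The sample text below is a hypothetical stand-in for that command's output.
report='Live datanodes (3):
Dead datanodes (1):'
# Pull the number out of the "Dead datanodes (N):" line.
dead=$(printf '%s\n' "$report" | sed -n 's/^Dead datanodes (\([0-9]*\)).*/\1/p')
echo "dead datanodes: $dead"
```

In a real cron-driven check, the `report` variable would be `$(hdfs dfsadmin -report)` and a nonzero count would trigger an alert.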