Job Description:
6+ years of hands-on experience managing large-scale distributed platforms and integration projects
6+ years of experience with Linux/Windows, including basic knowledge of Unix administration
1+ years of experience administering Hadoop cluster environments and the surrounding tools ecosystem: Cloudera/Hortonworks/Sqoop/Pig/HDFS
Experience across the Hadoop ecosystem, including HDFS, Hive, YARN, Flume, Oozie, Cloudera Impala, ZooKeeper, Hue, Sqoop, Kafka, Storm, Spark, and Spark Streaming, as well as NoSQL database knowledge such as HBase, Cassandra, and/or MongoDB
Familiarity with Spark, Kerberos authentication/authorization, and LDAP, with an understanding of cluster security
Exposure to high-availability configurations, Hadoop cluster connectivity and tuning, and Hadoop security configurations
Expertise in collaborating with application teams to install operating system and Hadoop updates, patches, and version upgrades as required
Experience working with load balancers, firewalls, DMZs, and TCP/IP protocols
Understanding of enterprise IT operations practices for security, support, backup, and recovery
Good understanding of operating systems (Unix/Linux) and networks, plus system administration experience
Good understanding of change management procedures
Experience with hardware selection, environment sizing, and capacity planning
Knowledge of Java, Python, Pig, Hive, or other languages is a plus

PREFERRED QUALIFICATIONS

Experience working with RDBMSs and Java
Exposure to NoSQL databases such as MongoDB and Cassandra
Experience with cloud technologies (AWS)
Certification in Hadoop Operations or Cassandra is desired