Job Description:
3 years of hands-on experience in the Hadoop ecosystem (HDFS, YARN, MapReduce, Oozie, AND Hive)
1 year of hands-on experience in Spark core AND Spark SQL (a short sketch follows this list)
5 years of hands-on programming experience in either core Java OR Spark
3 years of hands-on experience in Data Warehousing AND Data Marts AND Data/Dimensional Modeling AND ETL
1 year of hands-on experience in HBase OR Cassandra OR any other NoSQL DB
Understanding of distributed computing design patterns AND algorithms AND data structures AND security protocols
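
As a minimal illustration of the Spark core AND Spark SQL requirement above, the following Java sketch queries a Hive table through Spark SQL. The application name, database, table, and column names (sales.orders, customer_id, amount) are hypothetical placeholders, not part of any actual stack for this role.

    import org.apache.spark.sql.Dataset;
    import org.apache.spark.sql.Row;
    import org.apache.spark.sql.SparkSession;

    public class HiveQuerySketch {
        public static void main(String[] args) {
            // Start a session with Hive support so Spark SQL can read Hive tables.
            SparkSession spark = SparkSession.builder()
                    .appName("HiveQuerySketch")
                    .enableHiveSupport()
                    .getOrCreate();

            // Aggregate a hypothetical Hive table with Spark SQL.
            Dataset<Row> totals = spark.sql(
                    "SELECT customer_id, SUM(amount) AS total "
                  + "FROM sales.orders GROUP BY customer_id");

            totals.show(10);
            spark.stop();
        }
    }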

Desired Skills

Understanding of Kafka AND Spark Streaming (a short sketch follows this list)
Experience in any one ETL tool such as Talend, Kettle, Informatica, OR Ab Initio
Exposure to Hadoop OR NoSQL performance optimization and benchmarking using tools such as HiBench OR YCSB
Experience with performance monitoring tools such as Ganglia OR Nagios OR Splunk OR Dynatrace
Experience with continuous build and test processes using tools such as Maven AND Jenkins
Certification in Hortonworks OR Cloudera preferred BUT NOT MANDATORY
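
For the Kafka AND Spark Streaming item above, here is a minimal Java sketch using Spark Structured Streaming's Kafka source (the newer API that has largely superseded DStream-based Spark Streaming). The broker address localhost:9092 and topic name events are placeholders, and the spark-sql-kafka connector is assumed to be on the classpath.

    import org.apache.spark.sql.Dataset;
    import org.apache.spark.sql.Row;
    import org.apache.spark.sql.SparkSession;
    import org.apache.spark.sql.streaming.StreamingQuery;

    public class KafkaStreamSketch {
        public static void main(String[] args) throws Exception {
            SparkSession spark = SparkSession.builder()
                    .appName("KafkaStreamSketch")
                    .getOrCreate();

            // Subscribe to a hypothetical Kafka topic; requires the
            // spark-sql-kafka-0-10 connector on the classpath.
            Dataset<Row> events = spark.readStream()
                    .format("kafka")
                    .option("kafka.bootstrap.servers", "localhost:9092")
                    .option("subscribe", "events")
                    .load();

            // Kafka records arrive as binary key/value pairs; cast to strings.
            Dataset<Row> decoded = events.selectExpr(
                    "CAST(key AS STRING)", "CAST(value AS STRING)");

            // Print each micro-batch to the console as it arrives.
            StreamingQuery query = decoded.writeStream()
                    .format("console")
                    .outputMode("append")
                    .start();

            query.awaitTermination();
        }
    }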