Job Description:
Hands-on experience and proficiency with the Hadoop ecosystem, including: HDFS, Hive, HBase, YARN, Sqoop, Oozie, Spark, Ambari, and Ranger
Passion for big data/clustered environments
Scripting experience in both Python and Bash
Experience working in Linux/Unix environments
Experience with SQL and/or NoSQL
Understanding of relational data models
Strong analytical and problem-solving skills
Ability to work in a team environment to solve complex problems with little direction
Strong communication skills, both written and oral
Experience with Git and Jenkins
Experience administering the Hadoop stack, particularly HDP (Hortonworks Data Platform)
Experience working with Kerberized environments
Experience working with clusters
Theoretical knowledge of big data/analytics concepts
Experience developing and troubleshooting ETL
Understanding of networking concepts