Job Description:
· 5+ years of work experience in Big Data/Hadoop technologies.

· Strong experience with Hadoop ETL/data ingestion tools: Sqoop, Flume, Hive, Spark.

· Experience with Hadoop data consumption and other components: Hive, Ambari, Spark, Kafka.

· Experience monitoring, troubleshooting, and tuning services and applications, along with operational expertise: strong troubleshooting skills and an understanding of system capacity, bottlenecks, and the basics of memory, CPU, OS, storage, and networking.

· Experience with open-source configuration management and deployment tools such as Puppet or Chef, and scripting in Python, Shell, Perl, Ruby, or Bash.

· Good understanding of distributed computing environments.