Job Description:
Working experience with HDFS, YARN, Kafka, Spark, HBase, Phoenix, Hive, Presto, or equivalent large-scale distributed systems technologies
Hands-on experience with Big Data technologies such as Hadoop, Kafka, Apache NiFi, Hive, HBase, Sqoop, Spark, Oozie, and Pig
Experience with modern data pipelines, data streaming, and real-time analytics using tools such as Apache Kafka, AWS Kinesis, Google Cloud services, Spark Streaming, Elasticsearch, or similar tools
Manage all aspects of dataset creation and curation, including the frameworks used to derive metrics from the Hadoop, Oracle EDW, AWS, and Google Cloud stacks
Experience with Linux administration, including deployment, configuration, and cluster management for Hadoop, Spark, and Kafka
Experience with Puppet, Chef, Ansible, or other DevOps tools
Experience with monitoring tools such as Nagios, Graphite, or Zabbix
Good experience with scripting/programming languages such as Shell, Perl, Python, Go, or Java
A continuous learner and a critical thinker
A team player with great communication skills
Experience with, or a strong interest in, maintaining highly available services at production scale