Job Description:



Responsibilities:

·     Defining Big Data architecture and implementing Big Data solutions using HDFS, MapReduce, Pig, Hive, Flume, Sqoop, ZooKeeper, HBase, and Cassandra

·     Expert-level experience with Spark is mandatory

·     Defining business architecture to map business requirements to Hadoop components

·     Architecting, configuring, and managing the Big Data environment

·     Performing multiple POCs and making recommendations for next-generation projects/products related to Big Data

Requirements:

·     10+ years of IT experience

·     7+ years of experience with Java

·     4+ years of experience with Big Data technologies

·     Big Data administration skills are a plus

·     Must be able to work independently

·     Must have extensive knowledge of Big Data architecture (Hadoop, NoSQL, and distributed computing)

·     Must have working experience with Hadoop and related components such as Hive, Impala, Pig, Sqoop, MapReduce, Flume, H2O, etc.

·     Working knowledge of the Spark (MLlib, GraphX), Mesos, and Marathon ecosystems

·     Experience implementing Big Data security and governance

·     Experience with AWS Cloud Technologies is a plus

·     SQL development experience (T-SQL) with SSIS is a must

·     Strong analytical skills

·     Good team player

·     Good communication skills

·     Hortonworks and Cloudera certifications are highly desired





Please send your updated resume to my email.
CST provides its clients with complete, cost-effective, end-to-end personnel solutions across a range of industrial domains. CST's mission is to empower businesses around the world to make better, faster operational decisions.