Job Description:
Minimum 2 years of experience in Big Data.

Strong knowledge of Hadoop, distributed computing, and clusters; Cloudera Distribution experience is a plus.

Good knowledge of Java and OOP concepts; Scala and Python are good to have.

Spark, Hive, Oozie, and shell scripting are a must; real-time experience in Spark is a must.

Knowledge of Oracle or any other RDBMS, along with data warehousing and complex data transformation pipeline design/development experience, is required.

BFS (Banking and Financial Services) experience with knowledge of financial calculations is required.

Strong problem-solving and performance improvement skills are needed.

Must have 5+ years of total experience.