Job Description:
Detailed Job Description:
Should have in-depth knowledge of Big Data ecosystems, with at least a few years of experience using Spark and strong skills in Hadoop. Should be strong in Hive concepts: partitioning, bucketing, query optimisation, and the different types of joins. Should be hands-on in at least one programming language, such as Java or Scala. Should have good analytical and problem-solving skills and excellent communication skills.

Minimum years of experience*: 5+

Certifications Needed: None

Top 3 responsibilities you would expect the subcontractor to shoulder and execute*:

1. Application Development and Maintenance on Big Data technologies

2. Requirement analysis, testing, and deployment

3. Onsite-offshore coordination
