Job Description:
1) 5+ years of experience in big data: Hadoop, Spark, Hive, Pig, and MapReduce
2) Strong understanding of distributed systems and high availability
3) Ability to design solutions using the latest big data technologies
4) 5+ years of experience implementing big data projects
5) Experience with AWS or GCP for big data projects
6) Expertise in implementing data security on big data platforms
7) Experience in Scala, Python, or Java, or any scripting language
8) Knowledge of and experience in stream analytics
9) Very strong understanding of file formats such as Parquet and Avro, and which format is best suited to which scenario
10) Strong understanding of compression techniques and how they are used in Hadoop
11) Experience in performance tuning and benchmarking exercises